954 results for Local linearization methods
Abstract:
Conventional procedures used to assess the integrity of corroded piping systems with axial defects generally employ simplified failure criteria based upon a plastic collapse failure mechanism incorporating the tensile properties of the pipe material. These methods establish acceptance criteria for defects based on limited experimental data for low strength structural steels, which do not necessarily address specific requirements for the high grade steels currently used. For these cases, failure assessments may be overly conservative or show significant scatter in their predictions, which leads to unnecessary repair or replacement of in-service pipelines. Motivated by these observations, this study examines the applicability of a stress-based criterion based upon plastic instability analysis to predict the failure pressure of corroded pipelines with axial defects. A central focus is to gain additional insight into the effects of defect geometry and material properties on the attainment of a local limit load to support the development of stress-based burst strength criteria. The work provides an extensive body of results which lends further support to adopting failure criteria for corroded pipelines based upon ligament instability analyses. A verification study conducted on burst testing of large-diameter pipe specimens with different defect lengths shows the effectiveness of a stress-based criterion using local ligament instability in burst pressure predictions, even though the adopted burst criterion exhibits a potential dependence on defect geometry and possibly on the material's strain hardening capacity. Overall, the results presented here suggest that the use of stress-based criteria based upon plastic instability analysis of the defect ligament is a valid engineering tool for integrity assessments of pipelines with axially corroded defects. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
In this work, we have studied the influence of the substrate surface condition on the roughness and the structure of nanostructured DLC films deposited by high-density plasma chemical vapor deposition. Four methods were used to modify the silicon wafer surfaces before starting the deposition processes of the nanostructured DLC films: micro-diamond powder dispersion, micro-graphite powder dispersion, roughness generation by wet chemical etching, and roughness generation by plasma etching. The reference wafer was only submitted to a chemical cleaning. It was possible to see that the final roughness and the sp(3) hybridization degree (which is related to the structure and chemical composition) strongly depend on the substrate surface conditions. The surface roughness was observed by AFM and SEM, and the hybridization degree of the DLC films was analyzed by Raman spectroscopy. Thus, the effects of the substrate surface on the DLC film structure were confirmed. These phenomena can be explained by the fact that the locally higher surface energy and the sharp edges may induce local defects promoting the nanostructured characteristics in the DLC films. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
In this work, we have studied the influence of the substrate surface condition on the roughness and the structure of nanostructured DLC films deposited by high-density plasma chemical vapor deposition. Four methods were used to modify the silicon wafer surfaces before starting the deposition processes of the nanostructured DLC films: micro-diamond powder dispersion, micro-graphite powder dispersion, roughness generation by wet chemical etching, and roughness generation by plasma etching. The reference wafer was only submitted to a chemical cleaning. It was possible to see that the final roughness and the sp(3) hybridization degree strongly depend on the substrate surface conditions. The surface roughness was observed by AFM and SEM, and the hybridization degree of the DLC films was analyzed by Raman spectroscopy. In these samples, the final roughness and the sp(3) hybridization quantity depend strongly on the substrate surface condition. Thus, the effects of the substrate surface on the DLC film structure were confirmed. These phenomena can be explained by the fact that the locally higher surface energy and the sharp edges may induce local defects promoting the nanostructured characteristics in the DLC films. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
This work considers a semi-implicit system Δ, that is, a pair (S, y), where S is an explicit system described by a state representation ẋ(t) = f(t, x(t), u(t)), where x(t) ∈ R^n and u(t) ∈ R^m, which is subject to a set of algebraic constraints y(t) = h(t, x(t), u(t)) = 0, where y(t) ∈ R^l. An input candidate is a set of functions v = (v_1, ..., v_s), which may depend on time t, on x, and on u and its derivatives up to a finite order. The problem of finding a (local) proper state representation ż = g(t, z, v) with input v for the implicit system Δ is studied in this article. The main result shows necessary and sufficient conditions for the solution of this problem, under mild assumptions on the class of admissible state representations of Δ. These solvability conditions rely on an integrability test that is computed from the explicit system S. The approach of this article is the infinite-dimensional differential geometric setting of Fliess, Levine, Martin, and Rouchon (1999) ('A Lie-Backlund Approach to Equivalence and Flatness of Nonlinear Systems', IEEE Transactions on Automatic Control, 44(5), 922-937).
Abstract:
In this work, a wide-ranging analysis of local search multiuser detection (LS-MUD) for direct sequence/code division multiple access (DS/CDMA) systems under multipath channels is carried out considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, E_b/N_0, near-far effect, number of fingers of the Rake receiver, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) and complexity trade-off is accomplished among LS, genetic algorithm (GA) and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for the MUD implementation. The computational complexity is expressed in terms of the number of operations required to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, the deterministic strategy and the absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
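As a hedged illustration of the kind of bit-flip local search such detectors build on (the quadratic likelihood form and every name below are assumptions for the sketch, not details taken from the paper):

```python
import numpy as np

# Minimal sketch of 1-opt local search multiuser detection (LS-MUD).
# A common CDMA likelihood form is Omega(b) = 2 b^T y - b^T R b, with y the
# matched-filter outputs and R the user correlation matrix; b in {-1,+1}^K.
# This exact cost function is an assumption for illustration.

def ls_mud(y, R, b0):
    """Greedy 1-opt local search over bit vectors b in {-1, +1}^K."""
    b = b0.copy()
    cost = 2 * b @ y - b @ R @ b
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            b[k] = -b[k]                      # tentatively flip one bit
            new_cost = 2 * b @ y - b @ R @ b
            if new_cost > cost:               # keep the flip if it improves
                cost = new_cost
                improved = True
            else:
                b[k] = -b[k]                  # otherwise undo it
    return b

# Toy example: 3 users, identity correlation -> detector recovers sign(y)
R = np.eye(3)
y = np.array([1.2, -0.7, 0.4])
b_hat = ls_mud(y, R, np.array([-1.0, -1.0, -1.0]))
print(b_hat)  # -> [ 1. -1.  1.]
```

The deterministic, parameter-free flavor of this loop is what the abstract contrasts with GA and PSO, whose behavior depends on population sizes and tuning constants.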
Abstract:
This work summarizes some results about static state feedback linearization for time-varying systems. Three different necessary and sufficient conditions are stated in this paper. The first condition is the one by [Sluis, W. M. (1993). A necessary condition for dynamic feedback linearization. Systems & Control Letters, 21, 277-283]. The second and the third are generalizations of known results due respectively to [Aranda-Bricaire, E., Moog, C. H., Pomet, J. B. (1995). A linear algebraic framework for dynamic feedback linearization. IEEE Transactions on Automatic Control, 40, 127-132] and to [Jakubczyk, B., Respondek, W. (1980). On linearization of control systems. Bulletin de l'Academie Polonaise des Sciences. Serie des Sciences Mathematiques, 28, 517-522]. The proofs of the second and third conditions are established by showing the equivalence between these three conditions. The results are re-stated in the infinite-dimensional geometric approach of [Fliess, M., Levine, J., Martin, P., Rouchon, P. (1999). A Lie-Backlund approach to equivalence and flatness of nonlinear systems. IEEE Transactions on Automatic Control, 44(5), 922-937]. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
This paper analyzes the complexity-performance trade-off of several heuristic near-optimum multiuser detection (MuD) approaches applied to the uplink of synchronous single/multiple-input multiple-output multicarrier code division multiple access (S/MIMO MC-CDMA) systems. Genetic algorithm (GA), short term tabu search (STTS) and reactive tabu search (RTS), simulated annealing (SA), particle swarm optimization (PSO), and 1-opt local search (1-LS) heuristic multiuser detection algorithms (Heur-MuDs) are analyzed in detail, using a single-objective antenna-diversity-aided optimization approach. Monte Carlo simulations show that, after convergence, the performances reached by all near-optimum Heur-MuDs are similar. However, the computational complexities may differ substantially, depending on the system operation conditions. Their complexities are carefully analyzed in order to obtain a general complexity-performance framework comparison and to show that unitary Hamming distance search MuD (uH-ds) approaches (1-LS, SA, RTS and STTS) reach the best convergence rates, and among them, the 1-LS-MuD provides the best trade-off between implementation complexity and bit error rate (BER) performance.
Abstract:
The recent claim that the exit probability (EP) of a slightly modified version of the Sznajd model is a continuous function of the initial magnetization is questioned. This result has been obtained analytically and confirmed by Monte Carlo simulations, simultaneously and independently by two different groups (EPL, 82 (2008) 18006; 18007). It stands at odds with an earlier result which yielded a step function for the EP (Europhys. Lett., 70 (2005) 705). The dispute is investigated by proving that the continuous shape of the EP is a direct outcome of a mean-field treatment for the analytical result. As such, it is most likely caused by finite-size effects in the simulations. The improbable alternative would be a signature of the irrelevance of fluctuations in this system. Indeed, evidence is provided in support of the stepwise shape as going beyond the mean-field level. These findings yield new insight into the physics of one-dimensional systems with respect to the validity of a true equilibrium state when using solely local update rules. The suitability and significance of performing numerical simulations in those cases are discussed. To conclude, a great deal of caution is required when applying update rules to describe any system, especially social systems. Copyright (C) EPLA, 2011
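A minimal sketch of how an exit probability is estimated by Monte Carlo in such 1D two-state models. The update rule below (an agreeing pair of neighbors converts the two adjacent sites) is a generic Sznajd-type rule chosen for illustration, not the exact modified model the disputed papers study:

```python
import random

# Estimate the exit probability EP(p): the chance the chain reaches +1
# consensus from a random initial state with a fraction p of +1 opinions.
# Rule (an illustrative assumption): if sites i and i+1 agree, they convert
# their outer neighbors (periodic boundary).

def run_to_consensus(n, p, rng):
    s = [1 if rng.random() < p else -1 for _ in range(n)]
    while abs(sum(s)) != n:                   # iterate until consensus
        i = rng.randrange(n - 1)
        if s[i] == s[i + 1]:                  # agreeing pair convinces...
            s[i - 1] = s[i]                   # ...its left neighbor
            s[(i + 2) % n] = s[i]             # ...and its right neighbor
    return s[0]

def exit_probability(n, p, runs, seed=0):
    rng = random.Random(seed)
    wins = sum(run_to_consensus(n, p, rng) == 1 for _ in range(runs))
    return wins / runs

# EP should grow with the initial fraction of +1 spins; whether it tends to a
# smooth curve or a step as n grows is exactly the point under dispute.
print(exit_probability(16, 0.2, 100), exit_probability(16, 0.8, 100))
```

Running such a sketch at increasing n is the kind of finite-size scaling check the abstract argues is needed before trusting a continuous EP shape.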
Abstract:
This work considers a nonlinear time-varying system described by a state representation, with input u and state x. A given set of functions v, which is not necessarily the original input u of the system, is the (new) input candidate. The main result provides necessary and sufficient conditions for the existence of a local classical state space representation with input v. These conditions rely on integrability tests that are based on a derived flag. As a byproduct, one obtains a sufficient condition of differential flatness of nonlinear systems. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
An important topic in genomic sequence analysis is the identification of protein coding regions. In this context, several coding DNA model-independent methods based on the occurrence of specific patterns of nucleotides at coding regions have been proposed. Nonetheless, these methods have not been completely suitable due to their dependence on an empirically predefined window length required for a local analysis of a DNA region. We introduce a method based on a modified Gabor-wavelet transform (MGWT) for the identification of protein coding regions. This novel transform is tuned to analyze periodic signal components and presents the advantage of being independent of the window length. We compared the performance of the MGWT with other methods by using eukaryote data sets. The results show that MGWT outperforms all assessed model-independent methods with respect to identification accuracy. These results indicate that the source of at least part of the identification errors produced by the previous methods is the fixed working scale. The new method not only avoids this source of errors but also makes a tool available for detailed exploration of the nucleotide occurrence.
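For context, a hedged sketch of the classical fixed-window period-3 measure that window-based coding detectors rely on; the MGWT of the abstract replaces the fixed window with a scale tied to the analyzed period, and that refinement is not reproduced here:

```python
import cmath

# Classical period-3 signal for coding-region detection: project the four
# nucleotide indicator sequences onto frequency 1/3 over a fixed window.
# Function and variable names are illustrative assumptions.

def period3_strength(seq, start, window):
    """Total spectral content at frequency 1/3 over seq[start:start+window]."""
    total = 0.0
    for base in "ACGT":
        # DFT coefficient at k = window/3 of the binary indicator sequence
        coeff = sum(cmath.exp(-2j * cmath.pi * n / 3)
                    for n, ch in enumerate(seq[start:start + window])
                    if ch == base)
        total += abs(coeff) ** 2
    return total

# A perfectly 3-periodic, codon-like stretch scores far higher than a
# homogeneous stretch of the same length.
print(period3_strength("ATG" * 30, 0, 90))   # strong period-3 signal
print(period3_strength("A" * 90, 0, 90))     # essentially zero
```

The `window` argument is precisely the empirically chosen length the abstract identifies as a source of identification errors, which is what motivates a window-free transform.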
Abstract:
In this paper, we present various diagnostic methods for polyhazard models. Polyhazard models are a flexible family for fitting lifetime data. Their main advantage over single hazard models, such as the Weibull and the log-logistic models, is the ability to capture a large variety of nonmonotone hazard shapes, such as bathtub and multimodal curves. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. A discussion of the computation of the likelihood displacement, as well as the normal curvature in the local influence method, is presented. Finally, an example with real data is given for illustration.
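As a reminder of the standard quantities such diagnostics build on, in Cook's (1986) local influence formulation (the notation below is the usual one and is assumed here, not taken from the paper): for a perturbation vector ω with perturbed log-likelihood ℓ(θ | ω),

```latex
LD(\omega) = 2\left[\ell(\hat\theta) - \ell(\hat\theta_\omega)\right],
\qquad
C_l = 2\left|\, l^{\top} \Delta^{\top} \ddot{L}^{-1} \Delta\, l \,\right|,
```

where ℓ(θ̂_ω) maximizes the perturbed likelihood, L̈ is the Hessian of ℓ at θ̂, Δ has entries ∂²ℓ(θ | ω)/∂θᵢ∂ωⱼ evaluated at (θ̂, ω₀), and l is a unit direction in perturbation space; directions maximizing C_l point at the most influential observations.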
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped in k intervals so that ties are eliminated. Thus, the data modeling is performed by considering the discrete models of lifetime regression. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, referred to as global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to those measures, the local influence and the total influential estimate are also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed by using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered as a universal standard to estimate ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Delta e), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method to estimate ETo when Rn, Delta e, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day(-1). For these cases, U data were replaced by the normal values for the region and Delta e was estimated from temperature data. The Priestley-Taylor method was also a good option for estimating ETo when U and Delta e data were missing, mainly when calibrated locally (RMSE = 0.40 mm day(-1)). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day(-1). When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options to estimate ETo than the FAO PM method, since the RMSEs from these methods, respectively 0.79 and 0.83 mm day(-1), were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day(-1)). (C) 2009 Elsevier B.V. All rights reserved.
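The temperature-only fallback mentioned above can be sketched with the standard Hargreaves (1985) form; the 0.0023 coefficient below is the published default, before any local adjustment of the kind the study applies, and Ra must be supplied from tables or a solar-geometry routine:

```python
import math

# Hargreaves temperature-only reference evapotranspiration estimate.
# tmean, tmax, tmin in deg C; ra is extraterrestrial radiation expressed in
# mm/day of evaporation equivalent. Names are illustrative assumptions.

def eto_hargreaves(tmean, tmax, tmin, ra):
    """Reference evapotranspiration (mm/day), standard Hargreaves form."""
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

# Example: a mild summer day with Ra equivalent to 15 mm/day
eto = eto_hargreaves(20.0, 26.0, 14.0, 15.0)
print(round(eto, 2))  # -> 4.52
```

Local calibration, as performed in the study, typically replaces the 0.0023 coefficient (and sometimes the 17.8 offset) with values fitted against FAO PM estimates for the region.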
Abstract:
Purpose: Among environmental factors governing the innumerable processes that are active in estuarine environments, those of edaphic character have received special attention in recent studies. With the objectives of determining the spatial patterns of soil attributes and components across different mangrove forest landscapes and obtaining additional information on the cause-effect relationships between these variables and position within the estuary, we analyzed several soil attributes in 31 mangrove soil profiles from the state of São Paulo (Guaruja, Brazil). Materials and methods: Soil samples were collected at low tide along two transects within the CrumahA(0) mangrove forest. Samples were analyzed to determine pH, Eh, salinity, and the percentages of sand, silt, clay, total organic carbon (TOC), and total S. Mineralogy of the clay fraction (< 2 µm) was also studied by X-ray diffraction analysis, and partitioning of solid-phase Fe was performed by sequential extraction. Results and discussion: The results obtained indicate important differences in soil composition at different depths and landscape positions, causing variations in physicochemical parameters, clay mineralogy, TOC contents, and iron geochemistry. The results also indicate that physicochemical conditions may vary with the local microtopography. Soil salinity was determined by relative position in relation to flood tide and transition areas with highlands. The proportions of TOC and total S are conditioned by the sedimentation of organic matter derived from vegetation and by the prevailing redox conditions, which clearly favored intense sulfate reduction in the soils (similar to 80% of the total Fe is Fe-pyrite). Particle-size distribution is conditioned by erosive/depositional processes (present and past) and probably by the positioning of ancient and reworked sandy ridges. The existing physicochemical conditions appear to contribute to the synthesis (smectite) and transformation (kaolinite) of clay minerals. Conclusions: The results demonstrate that the position of soils in the estuary greatly affects soil attributes. Differences occur even at small scales (meters), indicating that both edaphic (soil classification, soil mineralogy, and soil genesis) and environmental (contamination and carbon stock) studies should take such variability into account.
Abstract:
The effect of thermal treatment on phenolic compounds and type 2 diabetes functionality linked to alpha-glucosidase and alpha-amylase inhibition, and hypertension-relevant angiotensin I-converting enzyme (ACE) inhibition, was investigated in selected bean (Phaseolus vulgaris L.) cultivars from Peru and Brazil using in vitro models. Thermal processing by autoclaving decreased the total phenolic content in all cultivars, whereas the 1,1-diphenyl-2-picrylhydrazyl radical scavenging activity-linked antioxidant activity increased among Peruvian cultivars. Alpha-amylase and alpha-glucosidase inhibitory activities were reduced significantly after heat treatment (73-94% and 8-52%, respectively), whereas ACE inhibitory activity was enhanced (9-15%). Specific phenolic acids such as chlorogenic and caffeic acid increased moderately following thermal treatment (2-16% and 5-35%, respectively). No correlation was found between phenolic contents and the functionality associated with antidiabetes and antihypertension potential, indicating that non-phenolic compounds may be involved. Thermally processed bean cultivars are interesting sources of phenolic acids linked to high antioxidant activity and show potential for hypertension prevention.