959 results for Stated preference methods
Abstract:
Nonlinear equations in mathematical physics and engineering are solved by linearizing the equations, forming various iterative procedures, and then executing the numerical simulation. For strongly nonlinear problems, the solution obtained in the iterative process can diverge due to numerical instability. As a result, the application of numerical simulation to strongly nonlinear problems is limited. Helicopter aeroelasticity involves the solution of systems of nonlinear equations in a computationally expensive environment. Reliable solution methods that do not require a Jacobian calculation at each iteration are needed for this problem. In this paper, a comparative study is carried out by incorporating different methods for solving the nonlinear equations in helicopter trim. Three different methods based on calculating the Jacobian at the initial guess are investigated. (C) 2011 Elsevier Masson SAS. All rights reserved.
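As a hedged illustration of this family of methods (a minimal sketch, not the paper's trim formulation; the residual, Jacobian and tolerances below are placeholders), a frozen-Jacobian (chord) Newton iteration factors the Jacobian once at the initial guess and reuses it at every iteration:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def chord_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0, factoring the Jacobian only once at x0."""
    x = np.array(x0, dtype=float)
    lu, piv = lu_factor(jacobian(x))          # Jacobian frozen at the initial guess
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x -= lu_solve((lu, piv), r)           # reuse the factorization, no re-evaluation
    return x

# Toy example: intersection of a circle and a line.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(chord_newton(f, J, [1.0, 0.5]))         # converges near [0.707, 0.707]
```

The factorization cost is paid once, at the price of linear rather than quadratic convergence; this is the trade-off such methods exploit when Jacobian evaluation dominates the cost per iteration.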
Abstract:
The present study performs spatial and temporal trend analysis of annual, monthly and seasonal maximum and minimum temperatures (t_max, t_min) in India. Recent trends in annual, monthly, winter, pre-monsoon, monsoon and post-monsoon extreme temperatures (t_max, t_min) have been analyzed for three time slots, viz. 1901-2003, 1948-2003 and 1970-2003. For this purpose, time series of extreme temperatures for India as a whole and for seven homogeneous regions, viz. Western Himalaya (WH), Northwest (NW), Northeast (NE), North Central (NC), East coast (EC), West coast (WC) and Interior Peninsula (IP), are considered. Rigorous trend detection has been carried out using a variety of non-parametric methods that account for the effect of serial correlation. During the last three decades, a minimum temperature trend is present for All India as well as in all temperature-homogeneous regions of India, at either the annual level or some seasonal level (winter, pre-monsoon, monsoon, post-monsoon). The results agree with the earlier observation that the trend in minimum temperature over India is significant in the last three decades (Kothawale et al., 2010). The sequential Mann-Kendall (MK) test reveals that most of the trends in both maximum and minimum temperature began after 1970, at either annual or seasonal levels. (C) 2012 Elsevier B.V. All rights reserved.
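For orientation, a minimal sketch of the classical Mann-Kendall trend test is given below (the study uses serial-correlation-aware variants and the sequential form, which are not reproduced here; the example series is synthetic):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Classical Mann-Kendall trend test (no tie or autocorrelation correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs of all pairwise forward differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                  # two-sided p-value
    return s, z, p

# Synthetic example: a noisy warming series should give a positive S and a small p-value.
rng = np.random.default_rng(0)
t_min = 15 + 0.03 * np.arange(50) + rng.normal(0, 0.3, 50)
print(mann_kendall(t_min))
```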
Abstract:
This paper presents an experimental study that was conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities that are carried out in the design process, depending on the method that is employed; in other words, whether the design methods that are used make a difference in the profile of time spent across the design activities. The second objective was to analyze whether there is any kind of relationship between the time spent on design process activities and the degree of creativity in the solutions that are obtained. Creativity was evaluated through the degree of novelty and the level of resolution of the designed solutions, using the creative product semantic scale (CPSS) questionnaire. The results show significant differences in the amount of time devoted to activities related to understanding the problem depending on the typology of the design method used, intuitive or logical. While the amount of time spent on analyzing the problem is very small in intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It has also been found that the amount of time spent in each design phase has an influence on the results in terms of creativity, but the results are not strong enough to quantify the extent of that influence. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. [DOI: 10.1115/1.4007362]
Abstract:
Effects of dynamic contact angle models on the flow dynamics of an impinging droplet in sharp interface simulations are presented in this article. In the considered finite element scheme, the free surface is tracked using the arbitrary Lagrangian-Eulerian approach. The contact angle is incorporated into the model by replacing the curvature with the Laplace-Beltrami operator and integration by parts. Further, the Navier-slip with friction boundary condition is used to avoid stress singularities at the contact line. Our study demonstrates that the contact angle models have almost no influence on the flow dynamics of the non-wetting droplets. In computations of the wetting and partially wetting droplets, different contact angle models induce different flow dynamics, especially during recoiling. It is shown that a large value for the slip number has to be used in computations of the wetting and partially wetting droplets in order to reduce the effects of the contact angle models. Among all models, the equilibrium model is simple and easy to implement. Further, the equilibrium model also incorporates the contact angle hysteresis. Thus, the equilibrium contact angle model is preferred in sharp interface numerical schemes.
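For orientation only (a standard identity in ALE free-surface finite element formulations, not reproduced from this article), the curvature term in the weak form can be rewritten using the Laplace-Beltrami operator of the identity mapping and integrated by parts; the prescribed contact angle then enters through the contact-line integral:

```latex
% Surface-tension term on the free surface \Gamma_F with contact line \zeta;
% \mathrm{id}_\Gamma is the identity map on \Gamma_F and \boldsymbol{\nu} the co-normal,
% whose orientation is set by the prescribed (dynamic or equilibrium) contact angle.
\int_{\Gamma_F} \sigma\,\mathcal{K}\,\mathbf{n}\cdot\mathbf{v}\,\mathrm{d}\gamma
  = -\int_{\Gamma_F} \sigma\,\nabla_{\Gamma}\mathrm{id}_{\Gamma} : \nabla_{\Gamma}\mathbf{v}\,\mathrm{d}\gamma
    + \int_{\zeta} \sigma\,\boldsymbol{\nu}\cdot\mathbf{v}\,\mathrm{d}\zeta
```

Written this way, no second derivatives of the free-surface parametrization are needed, and the contact angle model only affects the line integral along the contact line.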
Abstract:
Analysis of high resolution satellite images has been an important research topic for urban analysis. One of the important tasks in urban analysis is the automatic extraction of the road network. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. From an original image it is difficult and computationally expensive to extract roads due to the presence of other road-like features with straight edges. The image is therefore preprocessed to reduce this noise (buildings, parking lots, vegetation regions and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. IKONOS data at 1 m resolution has been used for the experiment.
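A minimal preprocessing sketch along these lines is shown below (assumptions: OpenCV is used, the input file name and all filter parameters are placeholders, and the final morphological step is only a rough stand-in for the Level Set / Mean Shift extraction itself):

```python
import cv2

# Hypothetical input tile; any 8-bit RGB satellite image would do.
img = cv2.imread("ikonos_tile.png")

# Median filter suppresses small non-linear noise segments while preserving edges.
denoised = cv2.medianBlur(img, 5)

# Mean-shift filtering flattens spectrally homogeneous regions (candidate road surfaces).
# sp = spatial window radius, sr = colour window radius -- both are tuning parameters.
segmented = cv2.pyrMeanShiftFiltering(denoised, sp=15, sr=25)

# Threshold plus an elongated morphological opening to keep road-like, linear structures.
gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))   # elongated kernel
roads = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("roads_candidate.png", roads)
```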
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have also been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we will provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.
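For reference, the size-dependent G_f in question is usually obtained from the RILEM work-of-fracture definition, commonly written as follows (standard notation, not reproduced from this paper; the self-weight correction term is sometimes omitted):

```latex
% W_0: area under the measured load--deflection curve
% m g \delta_0: self-weight correction over the final deflection \delta_0
% B: specimen thickness, D: depth, a_0: notch depth (ligament area B(D - a_0))
G_f \;=\; \frac{W_0 + m\,g\,\delta_0}{B\,(D - a_0)}
```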
Abstract:
Fast curve-fitting procedures are proposed for vertical and radial consolidation for rapid loading methods. In vertical consolidation, the next load increment can be applied at 50-60% consolidation (or even earlier if the compression index is known). In radial consolidation, the next load increment can be applied at just 10-15% consolidation. The effects of secondary consolidation on the coefficient of consolidation and ultimate settlement are minimized in both cases. A quick procedure is proposed for vertical consolidation that determines how far the calculated c_v is from the true c_v, where c_v is the coefficient of consolidation. In radial consolidation no such procedure is required because, at 10-15% consolidation, the effects of secondary consolidation are already small in most inorganic soils. The proposed rapid loading methods can be used when the settlement or time of load increment is not known. The characteristic features of vertical, radial, three-dimensional, and secondary consolidation are given in terms of the rate of settlement. A relationship is proposed between the coefficient of vertical consolidation, the load increment ratio, and the compression index. (C) 2013 American Society of Civil Engineers.
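For context, the coefficient of vertical consolidation recovered by such curve-fitting procedures follows the standard time-factor relation (textbook values, not taken from this paper):

```latex
% T_v: dimensionless time factor, d: drainage path length, t: elapsed time
c_v = \frac{T_v\, d^2}{t},
\qquad
c_v \approx \frac{0.197\, d^2}{t_{50}} \ \text{(Casagrande, 50\% consolidation)},
\qquad
c_v \approx \frac{0.848\, d^2}{t_{90}} \ \text{(Taylor, 90\% consolidation)}
```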
Abstract:
Two multicriterion decision-making methods, namely 'compromise programming' and the 'technique for order preference by similarity to an ideal solution' (TOPSIS), are employed to prioritise 22 micro-catchments (A1 to A22) of the Kherthal catchment, Rajasthan, India, and a comparative analysis is performed using the compound parameter approach. Seven criteria - drainage density, bifurcation ratio, stream frequency, form factor, elongation ratio, circulatory ratio and texture ratio - are chosen for the evaluation. The entropy method is employed to estimate the weights, or relative importance, of the criteria, which ultimately affect the ranking pattern or prioritisation of the micro-catchments. Spearman rank correlation coefficients are estimated to measure the extent to which the ranks obtained are correlated. Based on the average ranking approach supported by sensitivity analysis, micro-catchments A6, A10 and A3 are preferred (owing to their low rank values) for further improvements with suitable conservation and management practices, and the other micro-catchments can be treated accordingly at a later phase on a priority basis. It is concluded that the present approach can be explored for other similar situations with appropriate modifications.
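A minimal sketch of entropy weighting followed by TOPSIS ranking is shown below (the decision matrix, criteria and benefit/cost flags are toy values, not the study's data, which uses seven morphometric criteria over 22 micro-catchments):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: criteria whose values vary more get larger weights.
    Assumes a strictly positive decision matrix."""
    P = X / X.sum(axis=0)                               # normalise each criterion column
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy per criterion
    d = 1.0 - E                                         # degree of diversification
    return d / d.sum()

def topsis(X, w, benefit):
    """Rank alternatives by closeness to the ideal solution (rank 1 = best)."""
    R = X / np.sqrt((X**2).sum(axis=0))                 # vector-normalised matrix
    V = R * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal)**2).sum(axis=1))
    d_neg = np.sqrt(((V - anti)**2).sum(axis=1))
    closeness = d_neg / (d_pos + d_neg)
    return closeness.argsort()[::-1].argsort() + 1      # ranks, 1 = closest to ideal

# Toy decision matrix: 4 hypothetical micro-catchments x 3 criteria.
X = np.array([[2.1, 0.45, 3.2],
              [1.8, 0.60, 2.9],
              [2.5, 0.40, 3.8],
              [1.6, 0.55, 2.5]])
w = entropy_weights(X)
print(topsis(X, w, benefit=np.array([True, False, True])))
```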
Abstract:
A principal hypothesis for the evolution of leks (rare and intensely competitive territorial aggregations) is that leks result from females preferring to mate with clustered males. This hypothesis predicts more female visits and higher mating success per male on larger leks. Evidence for and against this hypothesis has been presented by different studies, primarily of individual populations, but its generality has not yet been formally investigated. We took a meta-analytical approach towards formally examining the generality of such a female bias in lekking species. Using available published data and using female visits as an index of female mating bias, we estimated the shape of the relationship between lek size and total female visits to a lek, female visits per lekking male and, where available, per capita male mating success. Individual analyses showed that female visits generally increased with lek size across the majority of taxa surveyed; the meta-analysis indicated that this relationship with lek size was disproportionately positive. The findings from analysing per capita female visits were mixed, with an increase with lek size detected in half of the species, which were, however, widely distributed taxonomically. Taken together, these findings suggest that a female bias for clustered males may be a general process across lekking species. Nevertheless, the substantial variation seen in these relationships implies that other processes are also important. Analyses of per capita copulation success suggested that, more generally, increased per capita mating benefits may be an important selective factor in lek maintenance.
Abstract:
In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents. Moreover, determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model developed based on real-world data obtained using a survey on human subjects, and exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property which we call expected weak insensitivity. We demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties, which makes our methodology attractive for scalable preference aggregation over large scale social networks. We conclude that our approach is superior to random polling while aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
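As a hedged illustration of what a preference aggregation rule computes (the Borda count below is a generic textbook rule, not one of the rules analyzed in the paper, and the ranking profiles are toy data):

```python
from collections import defaultdict

def borda(profiles):
    """Aggregate ranked preferences: an alternative ranked k-th from last earns k points."""
    scores = defaultdict(int)
    for ranking in profiles:                  # each ranking lists the best alternative first
        m = len(ranking)
        for pos, alt in enumerate(ranking):
            scores[alt] += m - 1 - pos        # top choice gets m-1 points, last gets 0
    return sorted(scores, key=scores.get, reverse=True)

# Three agents ranking four alternatives.
profiles = [["a", "b", "c", "d"],
            ["b", "a", "d", "c"],
            ["a", "c", "b", "d"]]
print(borda(profiles))                        # aggregate preference order: ['a', 'b', 'c', 'd']
```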
Abstract:
A review of high operating temperature (HOT) infrared (IR) photon detector technology, covering material requirements, device design and the state of the art achieved, is presented in this article. The HOT photon detector concept offers the promise of operation at temperatures above 120 K up to near room temperature. The advantages are reductions in system size, weight and cost, and an increase in system reliability. A theoretical study of the thermal generation-recombination (g-r) processes, such as Auger and defect-related Shockley-Read-Hall (SRH) recombination, that are responsible for increased dark current in HgCdTe detectors is presented. The results of the theoretical analysis are used to evaluate the performance of long wavelength (LW) and mid wavelength (MW) IR detectors at high operating temperatures. (C) 2013 Elsevier B.V. All rights reserved.
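As background for the dark-current discussion (the textbook expression for a single trap level, not reproduced from the article), the net SRH recombination rate is:

```latex
% \tau_n, \tau_p: electron/hole lifetimes; n_i: intrinsic carrier concentration
% n_1 = n_i e^{(E_t - E_i)/kT}, \; p_1 = n_i e^{-(E_t - E_i)/kT} for a trap at energy E_t
U_{\mathrm{SRH}} = \frac{np - n_i^{2}}{\tau_p\,(n + n_1) + \tau_n\,(p + p_1)}
```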
Abstract:
In this article, we derive an a posteriori error estimator for various discontinuous Galerkin (DG) methods that are proposed in (Wang, Han and Cheng, SIAM J. Numer. Anal., 48: 708-733, 2010) for an elliptic obstacle problem. Using a key property of DG methods, we perform the analysis in a general framework. The error estimator we have obtained for DG methods is comparable with the estimator for the conforming Galerkin (CG) finite element method. In the analysis, we construct a non-linear smoothing function mapping DG finite element space to CG finite element space and use it as a key tool. The error estimator consists of a discrete Lagrange multiplier associated with the obstacle constraint. It is shown for non-over-penalized DG methods that the discrete Lagrange multiplier is uniformly stable on non-uniform meshes. Finally, numerical results demonstrating the performance of the error estimator are presented.
Abstract:
Energy research is to a large extent materials research, encompassing the physics and chemistry of materials, including their synthesis, processing toward components and design toward architectures, allowing for their functionality as energy devices, extending toward their operation parameters and environment, and including also their degradation, limited life, ultimate failure and potential recycling. In all these stages, X-ray and electron spectroscopy are helpful methods of analysis, characterization and diagnostics for the engineer and for the researcher working in basic science. This paper gives a short overview of experiments with X-ray and electron spectroscopy for solar energy and water splitting materials, and also addresses the issue of solar fuel, a relatively new topic in energy research. The featured systems are iron oxide and tungsten oxide as photoanodes, and hydrogenases as molecular systems. We present surface and subsurface studies with ambient pressure XPS and hard X-ray XPS, resonant photoemission, light-induced effects in resonant photoemission experiments, a photo-electrochemical in situ/operando NEXAFS experiment in a liquid cell, and nuclear resonant vibrational spectroscopy (NRVS). (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Structural dynamics of dendritic spines is one of the key correlative measures of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy face difficulties with scattering. This results in a low signal-to-noise ratio and limits the imaging depth to a few tens of microns. Multiphoton microscopy (MpM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons with longer wavelengths minimize scattering and allow access to deeper brain regions at several hundred microns. In this article, we provide a basic understanding of the physical phenomena that give MpM an edge over conventional microscopy. Further, we highlight a few of the key studies in the field of learning and memory that would not have been possible without the advent of MpM.
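The localization advantage mentioned above follows from the nonlinear excitation probability (a standard relation, not taken from this article): two-photon absorption scales with the square of the excitation intensity, so appreciable fluorescence is generated only in the focal volume where the intensity is highest.

```latex
% R_{2P}: two-photon excitation rate per fluorophore, \sigma_2: two-photon cross-section
R_{2P} \;\propto\; \sigma_2\, I^{2}
```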