931 results for success models comparison
Abstract:
The near-nucleus coma of Comet 9P/Tempel 1 has been simulated with the 3D Direct Simulation Monte Carlo (DSMC) code PDSC++ (Su, C.-C. [2013]. Parallel Direct Simulation Monte Carlo (DSMC) Methods for Modeling Rarefied Gas Dynamics. PhD Thesis, National Chiao Tung University, Taiwan) and the derived column densities have been compared to observations of the water vapour distribution found by using the infrared imaging spectrometer on the Deep Impact spacecraft (Feaga, L.M., A’Hearn, M.F., Sunshine, J.M., Groussin, O., Farnham, T.L. [2007]. Icarus 191(2), 134–145. http://dx.doi.org/10.1016/j.icarus.2007.04.038). Modelled total production rates are also compared to various observations made at the time of the Deep Impact encounter. Three different models were tested. For all models, the shape model constructed from the Deep Impact observations by Thomas et al. (Thomas, P.C., Veverka, J., Belton, M.J.S., Hidy, A., A’Hearn, M.F., Farnham, T.L., et al. [2007]. Icarus, 187(1), 4–15. http://dx.doi.org/10.1016/j.icarus.2006.12.013) was used. Outgassing depending only on the cosine of the solar insolation angle on each shape model facet is shown to provide an unsatisfactory model. Models constructed on the basis of active areas suggested by Kossacki and Szutowicz (Kossacki, K., Szutowicz, S. [2008]. Icarus, 195(2), 705–724. http://dx.doi.org/10.1016/j.icarus.2007.12.014) are shown to be superior. The Kossacki and Szutowicz model, however, also shows deficits which we have sought to improve upon. For the best model we investigate the properties of the outflow.
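The comparison hinges on turning a simulated 3D number-density field into line-of-sight column densities that can be matched against the spectrometer maps. The following is a minimal sketch of that integration step, not the PDSC++ code: the column_density helper and the spherically symmetric toy density field (with made-up production rate and outflow speed) are assumptions for illustration only.

```python
import numpy as np

def column_density(n_field, origin, direction, s_max, ds=10.0):
    """Integrate a number-density field n_field(point) [m^-3] along a line
    of sight to obtain a column density [m^-2] (rectangle-rule integration).

    origin    : (3,) start of the line of sight [m]
    direction : (3,) unit vector of the line of sight
    s_max     : integration length [m]
    ds        : step size [m]
    """
    s = np.arange(0.0, s_max, ds)
    points = origin[None, :] + s[:, None] * direction[None, :]
    densities = np.array([n_field(p) for p in points])
    return float(np.sum(densities) * ds)

# Toy density field: spherically symmetric 1/r^2 outflow,
# n(r) = Q / (4 pi r^2 v), with assumed production rate Q and speed v.
Q, v = 5e27, 400.0   # molecules/s, m/s (illustrative values only)
n_toy = lambda p: Q / (4.0 * np.pi * max(np.dot(p, p), 1e4) * v)

los = column_density(n_toy, origin=np.array([0.0, 2e4, 0.0]),
                     direction=np.array([1.0, 0.0, 0.0]), s_max=1e6)
print(f"column density along sample line of sight: {los:.3e} m^-2")
```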
Abstract:
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the object of investigation of the European Space Agency mission Rosetta. This report presents the first full 3D simulation results of 67P’s neutral gas coma. In this study we include results from a direct simulation Monte Carlo method, a hydrodynamic code, and a purely geometric calculation which computes the total illuminated surface area on the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic concept is the assumption that these illumination conditions on the nucleus are the main driver for the gas activity of the comet. As a consequence, the total production rate of 67P varies as a function of solar insolation. The best agreement between the model and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. We are able to reproduce the overall features of these local neutral number density measurements of ROSINA COPS for the time period between early August 2014 and January 1, 2015 with all three models. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models. However, the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity is validated by the models. According to our simulation results we find the total production rate of 67P to be constant between August and November 2014 with a value of about 1 × 10²⁶ molecules s⁻¹.
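A minimal sketch of the kind of illumination-driven boundary condition described above: a per-facet flux proportional to the cosine of the solar zenith angle, with a constant night-side floor of roughly 7-10% of the peak flux, summed over facet areas to give a total production rate. The facet geometry, f_max, and all numbers below are invented for illustration; the actual DSMC, hydrodynamic, and geometric models are far more detailed (e.g. they trace shadowing by the irregular nucleus).

```python
import numpy as np

def facet_flux(normals, sun_dir, f_max, night_fraction=0.08):
    """Illumination-driven gas flux per facet (simplified sketch).

    normals        : (N, 3) outward unit normals of the shape-model facets
    sun_dir        : (3,) unit vector from the nucleus toward the Sun
    f_max          : peak flux at normal incidence [molecules m^-2 s^-1]
    night_fraction : night-side flux as a fraction of f_max (7-10% in the text)

    Shadowing by other facets is ignored here.
    """
    cos_sza = normals @ sun_dir                      # cosine of solar zenith angle
    flux = f_max * np.clip(cos_sza, 0.0, None)       # day side: proportional to cos
    return np.maximum(flux, night_fraction * f_max)  # night side: constant floor

# Total production rate = sum of flux * area over all facets (illustrative mesh).
rng = np.random.default_rng(0)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
areas = np.full(1000, 1.0e4)                         # 10^4 m^2 per facet (made up)
sun = np.array([1.0, 0.0, 0.0])

Q_total = np.sum(facet_flux(normals, sun, f_max=1.0e22) * areas)
print(f"total production rate: {Q_total:.2e} molecules/s")
```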
Abstract:
High-resolution, ground-based and independent observations including co-located wind radiometer, lidar stations, and infrasound instruments are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern hemisphere midlatitudes. Systematic comparisons between observations, the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses including the recent Integrated Forecast System cycles 38r1 and 38r2, NASA’s Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalyses, and the free-running Max Planck Institute Earth System Model – Low Resolution (MPI-ESM-LR) climate model are carried out in both the temporal and spectral domains. We find that ECMWF and MERRA are broadly consistent with lidar and wind radiometer measurements up to ~40 km. For both temperature and horizontal wind components, deviations increase with altitude as the assimilated observations become sparser. Between 40 and 60 km altitude, the standard deviation of the mean difference exceeds 5 K for the temperature and 20 m/s for the zonal wind. The largest deviations are observed in winter, when the variability from large-scale planetary waves dominates. Between the lidar data and MPI-ESM-LR there is overall agreement in spectral amplitude down to periods of 15–20 days. At shorter time scales, variability in the model is lower by ~10 dB. Infrasound observations indicate generally good agreement with ECMWF wind and temperature products. As such, this study demonstrates the potential of the infrastructure of the Atmospheric Dynamics Research Infrastructure in Europe project that integrates various measurements and provides a quantitative understanding of stratosphere-troposphere dynamical coupling for numerical weather prediction applications.
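The quoted statistics (for example, a standard deviation of the mean difference exceeding 5 K between 40 and 60 km) come from systematic model-minus-observation comparisons of matched profiles. A hedged sketch of that basic bookkeeping, with synthetic arrays standing in for the lidar and analysis fields, might look as follows.

```python
import numpy as np

def profile_difference_stats(obs, model, alt, z_min, z_max):
    """Mean difference (bias) and standard deviation of (model - obs)
    within an altitude band, for matched temperature or wind profiles.

    obs, model : (n_profiles, n_levels) arrays on a common altitude grid
    alt        : (n_levels,) altitudes [km]
    """
    band = (alt >= z_min) & (alt <= z_max)
    diff = model[:, band] - obs[:, band]
    return np.nanmean(diff), np.nanstd(diff)

# Illustrative synthetic profiles: the "analysis" drifts away from the
# "observations" above ~40 km, mimicking deviations growing with altitude.
alt = np.linspace(30.0, 60.0, 31)
obs = 250.0 + 5.0 * np.random.randn(100, alt.size)
model = obs + 0.15 * np.maximum(alt - 40.0, 0.0) * np.random.randn(100, alt.size)

bias, sigma = profile_difference_stats(obs, model, alt, 40.0, 60.0)
print(f"bias = {bias:.2f} K, std of difference = {sigma:.2f} K")
```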
Abstract:
Chironomid-temperature inference models based on North American, European and combined surface sediment training sets were compared to assess the overall reliability of their predictions. Between 67 and 76 of the major chironomid taxa in each data set showed a unimodal response to July temperature, whereas between 5 and 22 of the common taxa showed a sigmoidal response. July temperature optima were highly correlated among the training sets, but the correlations for other taxon parameters such as tolerances and weighted averaging partial least squares (WA-PLS) and partial least squares (PLS) regression coefficients were much weaker. PLS, weighted averaging, WA-PLS, and the Modern Analogue Technique all provided useful and reliable temperature inferences. Although jack-knifed error statistics suggested that two-component WA-PLS models had the highest predictive power, intercontinental tests suggested that other inference models performed better. The various models were able to provide good July temperature inferences even where no good or close modern analogues for the fossil chironomid assemblages existed. When the models were applied to fossil Lateglacial assemblages from North America and Europe, the inferred rates and magnitude of July temperature changes varied among models. All models, however, revealed similar patterns of Lateglacial temperature change. Depending on the model used, the inferred Younger Dryas July temperature decrease ranged between 2.5 and 6°C.
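Of the inference methods named above, simple weighted averaging is the easiest to illustrate: taxon optima are abundance-weighted means of the training-set temperatures, and fossil temperatures are abundance-weighted means of those optima. The sketch below uses random numbers in place of real chironomid counts and omits deshrinking and the PLS components of WA-PLS.

```python
import numpy as np

def wa_optima(abundances, temperatures):
    """Taxon temperature optima by weighted averaging.

    abundances   : (n_lakes, n_taxa) relative abundances from the training set
    temperatures : (n_lakes,) observed mean July temperatures
    """
    return (abundances.T @ temperatures) / abundances.sum(axis=0)

def wa_reconstruct(fossil, optima):
    """Infer temperatures for fossil assemblages as abundance-weighted means
    of the taxon optima (classical/inverse deshrinking omitted here)."""
    return (fossil @ optima) / fossil.sum(axis=1)

# Illustrative use with random data standing in for chironomid assemblages.
rng = np.random.default_rng(1)
train = rng.random((50, 20))          # 50 training lakes, 20 taxa
july_T = rng.uniform(5.0, 18.0, 50)   # training July temperatures [deg C]
fossil = rng.random((10, 20))         # 10 fossil samples

optima = wa_optima(train, july_T)
print(wa_reconstruct(fossil, optima))
```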
Abstract:
Ozone stomatal fluxes were modeled for a 3-year period following different approaches for a commercial variety of durum wheat (Triticum durum Desf. cv. Camacho) at the phenological stage of anthesis. All models performed in the same range, although not all of them afforded equally significant results. Nevertheless, all of them suggest that stomatal conductance would account for the main percentage of ozone deposition fluxes. A new modeling approach was tested, based on a 3-D architectural model of the wheat canopy, and fairly accurate results were obtained. Plant species-specific measurements, as well as measurements of stomatal conductance and environmental parameters, were required. The method proposed for calculating ozone stomatal fluxes (FO3 3-D) from experimental stomatal conductance (gs) data, modeling them as a function of certain environmental parameters in conjunction with the YPLANT model, seems to be adequate: it provides realistic estimates of the canopy FO3 3-D and integrates, rather than neglects, the contribution of the lower leaves with respect to the flag leaf, although further development of the model is needed.
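As a rough illustration of the quantity being modelled, a leaf-level stomatal ozone flux can be approximated as the stomatal conductance to O3 times the ambient O3 mole fraction, and a canopy total follows by weighting each leaf by its area. The sketch below uses invented leaf values and ignores boundary-layer and internal resistances; it is not the FO3 3-D/YPLANT implementation.

```python
def stomatal_o3_flux(gs_o3, o3_ambient):
    """Simplified leaf-level ozone stomatal flux [nmol m^-2 s^-1]:
    stomatal conductance to O3 [mol m^-2 s^-1] * O3 mole fraction [nmol mol^-1].
    Boundary-layer and internal resistances are neglected in this sketch."""
    return gs_o3 * o3_ambient

# Canopy-scale estimate: sum leaf fluxes weighted by leaf area, so that the
# lower leaves contribute alongside the flag leaf (as the abstract emphasises).
leaves = [
    # (leaf area m^2, gs_O3 mol m^-2 s^-1, O3 at the leaf nmol mol^-1) - illustrative
    (0.003, 0.25, 45.0),   # flag leaf, well lit
    (0.004, 0.15, 42.0),   # mid-canopy leaf, partly shaded
    (0.005, 0.08, 40.0),   # lower leaf, shaded
]

canopy_flux = sum(area * stomatal_o3_flux(gs, o3) for area, gs, o3 in leaves)
print(f"canopy ozone stomatal flux: {canopy_flux:.3f} nmol s^-1")
```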
Abstract:
At present there is much literature that refers to the advantages and disadvantages of different methods of statistical and dynamical downscaling of climate variables projected by climate models. Less attention has been paid to other indirect variables, like runoff, which play a significant role in evaluating the impact of climate change on hydrological systems. Runoff presents a much greater bias in climate models than other climate variables, like temperature or precipitation. It is therefore very important to identify the methods that minimize bias while downscaling runoff from the gridded results of climate models to the basin scale.
Abstract:
Application of Monte Carlo simulation and Analysis of Variance (ANOVA) techniques to the comparison of dynamic stochastic models for traffic accidents.
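A hedged sketch of how Monte Carlo replications and one-way ANOVA can be combined to compare model outputs: run many replications of each model, then test whether the mean outputs differ significantly. The three "models" below are stand-in random-number generators with invented parameters, not the traffic-accident models of the study.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

def run_model(mean_rate, dispersion, n_runs=200):
    """Stand-in for one stochastic accident model: each Monte Carlo run
    returns a simulated yearly accident count (hypothetical outputs)."""
    return rng.normal(mean_rate, dispersion, size=n_runs)

# Monte Carlo replications for three hypothetical model variants.
model_a = run_model(120.0, 15.0)
model_b = run_model(125.0, 15.0)
model_c = run_model(121.0, 15.0)

# One-way ANOVA: do the models produce significantly different mean outputs?
f_stat, p_value = f_oneway(model_a, model_b, model_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```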
Abstract:
An elliptic computational fluid dynamics wake model based on the actuator disk concept is used to simulate a wind turbine, approximated by a disk upon which a distribution of forces, defined as axial momentum sources, is applied on an incoming non-uniform shear flow. The rotor is supposed to be uniformly loaded, with the exerted forces estimated as a function of the incident wind speed, thrust coefficient and rotor diameter. The model is assessed in terms of wind speed deficit and added turbulence intensity for different turbulence models and is validated against experimental measurements from the Sexbierum wind turbine experiment.
Abstract:
A simplified CFD wake model based on the actuator disk concept is used to simulate the wind turbine, represented by a disk upon which a distribution of forces, defined as axial momentum sources, is applied on the incoming non-uniform flow. The rotor is supposed to be uniformly loaded, with the exerted forces a function of the incident wind speed, the thrust coefficient and the rotor diameter. The model is tested under different parameterizations of turbulence models and validated through experimental measurements downwind of a wind turbine in terms of wind speed deficit and turbulence intensity.
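In both wake-model abstracts above, the rotor enters the CFD solver as a uniformly loaded actuator disk whose thrust is spread over the disk cells as an axial momentum source. A minimal sketch of that source term follows, with an assumed numerical disk thickness and an illustrative operating point; the actual solvers distribute this term over a mesh rather than returning a single value.

```python
import numpy as np

def actuator_disk_source(u_inf, ct, diameter, rho=1.225, disk_thickness=None):
    """Axial momentum source representing a uniformly loaded rotor disk.

    Total thrust: T = 0.5 * rho * Ct * A * U_inf^2, applied opposite to the flow
    and returned as a volumetric source [N m^-3] spread uniformly over the disk
    volume (area * thickness), as in simplified actuator-disk CFD models.
    """
    area = np.pi * (diameter / 2.0) ** 2
    thrust = 0.5 * rho * ct * area * u_inf ** 2
    if disk_thickness is None:
        disk_thickness = 0.05 * diameter        # an assumed numerical thickness
    return -thrust / (area * disk_thickness)    # negative: momentum sink for the flow

# Example: an 80 m rotor in an 8 m/s incident wind with Ct = 0.8 (illustrative).
print(f"momentum source: {actuator_disk_source(8.0, 0.8, 80.0):.2f} N/m^3")
```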
Abstract:
A hybrid Eulerian-Lagrangian approach is employed to simulate heavy particle dispersion in turbulent pipe flow. The mean flow is provided by the Eulerian simulations developed by means of JetCode, whereas the fluid fluctuations seen by the particles are prescribed by a stochastic differential equation based on a normalized Langevin equation. The statistics of particle velocity are compared to LES data which contain detailed statistics of velocity for particles with a diameter equal to 20.4 µm. The model is in good agreement with the LES data for the axial mean velocity, whereas the rms of the axial and radial velocities should be adjusted.
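The fluctuation model referred to above is of Langevin (Ornstein-Uhlenbeck) type. A minimal Euler-Maruyama sketch of such an equation is shown below, with made-up time-scale and rms values rather than the JetCode statistics; the full model conditions these quantities on the local mean flow seen by each particle.

```python
import numpy as np

def langevin_fluctuations(n_steps, dt, tau_l, sigma, rng=None):
    """Euler-Maruyama integration of a Langevin (Ornstein-Uhlenbeck) equation
    for the fluid velocity fluctuation u' seen by a particle:

        du' = -(u'/tau_L) dt + sigma * sqrt(2/tau_L) dW

    tau_l : Lagrangian integral time scale seen by the particle [s]
    sigma : rms of the velocity fluctuation [m/s]
    """
    if rng is None:
        rng = np.random.default_rng()
    u = np.zeros(n_steps)
    for i in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))                 # Wiener increment
        u[i] = u[i - 1] - (u[i - 1] / tau_l) * dt + sigma * np.sqrt(2.0 / tau_l) * dw
    return u

# Illustrative trajectory; the rms of u should relax toward sigma.
u = langevin_fluctuations(n_steps=20000, dt=1e-4, tau_l=5e-3, sigma=0.1)
print(f"sample rms of fluctuation: {u.std():.3f} m/s (target 0.100)")
```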
Abstract:
During launch, satellites and their equipment are subjected to loads of a random nature over a wide frequency range. Their vibro-acoustic response is an important issue to be analysed, for example for folded solar arrays and antennas. The main issue at low modal density is the modelling of combinations involving air layers, structures and the external fluid. Depending on the modal density, different methodologies, such as FEM, BEM and SEA, should be considered. This work focuses on the analysis of different combinations of these methodologies used to characterise the vibro-acoustic response of two rectangular sandwich panels, both isolated and with an air layer between them, under a diffuse acoustic field. Focusing on the modelling of air layers, different models are proposed. To illustrate the phenomenology described and studied, experimental results from an acoustic test on an ARA-MKIII solar array in folded configuration are presented along with numerical results.
Abstract:
This paper presents a dynamic LM adaptation based on the topic that has been identified in a speech segment. We use LSA and the given topic labels in the training dataset to obtain and use the topic models. We propose a dynamic language model adaptation to improve the recognition performance in a two-stage AST system. The final stage makes use of topic identification with two variants: the first one uses just the most probable topic and the other one depends on the relative distances of the topics that have been identified. We perform the adaptation of the LM as a linear interpolation between a background model and a topic-based LM. The interpolation weight is dynamically adapted according to different parameters. The proposed method is evaluated on the Spanish partition of the EPPS speech database. We achieved a relative reduction in WER of 11.13% over the baseline system, which uses a single background LM.
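The adaptation step itself is a linear interpolation between the background LM and a topic LM, with the weight tied to how confidently the topic was identified. The sketch below is an assumed illustration of that idea, not the system described in the abstract: the confidence-based weighting scheme, the topic names and all probabilities are invented.

```python
def interpolate_lm(p_background, p_topic, lam):
    """Linear interpolation of word probabilities from a background LM and a
    topic-adapted LM: P(w|h) = lam * P_topic(w|h) + (1 - lam) * P_background(w|h)."""
    return lam * p_topic + (1.0 - lam) * p_background

def dynamic_weight(topic_posteriors, lam_max=0.4):
    """Adapt the interpolation weight to how confidently a topic was identified:
    a sharper posterior over topics gives a larger weight (an assumed scheme,
    standing in for the distance-based adaptation described in the abstract)."""
    confidence = max(topic_posteriors.values())
    return lam_max * confidence

# Illustrative use with made-up probabilities for one word in context.
posteriors = {"economy": 0.7, "health": 0.2, "sports": 0.1}
lam = dynamic_weight(posteriors)
p = interpolate_lm(p_background=0.0012, p_topic=0.0050, lam=lam)
print(f"lambda = {lam:.2f}, interpolated P(w|h) = {p:.5f}")
```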
Abstract:
Alterations in sodium channel expression and function have been suggested as a key molecular event underlying the abnormal processing of pain after peripheral nerve or tissue injury. Although the relative contribution of individual sodium channel subtypes to this process is unclear, the biophysical properties of the tetrodotoxin-resistant current, mediated, at least in part, by the sodium channel PN3 (SNS), suggests that it may play a specialized, pathophysiological role in the sustained, repetitive firing of the peripheral neuron after injury. Moreover, this hypothesis is supported by evidence demonstrating that selective “knock-down” of PN3 protein in the dorsal root ganglion with specific antisense oligodeoxynucleotides prevents hyperalgesia and allodynia caused by either chronic nerve or tissue injury. In contrast, knock-down of NaN/SNS2 protein, a sodium channel that may be a second possible candidate for the tetrodotoxin-resistant current, appears to have no effect on nerve injury-induced behavioral responses. These data suggest that relief from chronic inflammatory or neuropathic pain might be achieved by selective blockade or inhibition of PN3 expression. In light of the restricted distribution of PN3 to sensory neurons, such an approach might offer effective pain relief without a significant side-effect liability.
Abstract:
High-quality software, delivered on time and budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often to redevelop the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "to drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after issuing this initiative, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns. Investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the Aerospace industry have led to a number of questions about reuse and how the industry is employing reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems. Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codes that could be counted and measured. From the search of existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort for the model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded projects and those used in nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported using code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code in embedded systems when the platform changed. The semistructured interviews were performed to help explain why the phenomena seen in the literature review and the questionnaire were observed. We asked respected industry professionals, such as senior fellows, fellows and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification, but, especially in embedded systems, once it has to be changed, reuse of code yields few benefits. We also learned that, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. Finally, we conclude that while reuse in embedded systems and nonembedded systems is different today, the two are converging. As heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like reuse in nonembedded systems.