956 results for Large modeling projects
Abstract:
Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own.
Abstract:
Catalysis at organophilic silica-rich surfaces of zeolites and feldspars might generate replicating biopolymers from simple chemicals supplied by meteorites, volcanic gases, and other geological sources. Crystal–chemical modeling yielded packings for amino acids neatly encapsulated in 10-ring channels of the molecular sieve silicalite-ZSM-5-(mutinaite). Calculation of binding and activation energies for catalytic assembly into polymers is progressing for a chemical composition with one catalytic Al–OH site per 25 neutral Si tetrahedral sites. Internal channel intersections and external terminations provide special stereochemical features suitable for complex organic species. Polymer migration along nano/micrometer channels of ancient weathered feldspars, plus exploitation of phosphorus and various transition metals in entrapped apatite and other microminerals, might have generated complexes of replicating catalytic biomolecules, leading to primitive cellular organisms. The first cell wall might have been an internal mineral surface, from which the cell developed a protective biological cap emerging into a nutrient-rich “soup.” Ultimately, the biological cap might have expanded into a complete cell wall, allowing mobility and colonization of energy-rich challenging environments. Electron microscopy of honeycomb channels inside weathered feldspars of the Shap granite (northwest England) has revealed modern bacteria, perhaps indicative of Archaean ones. All known early rocks were metamorphosed too highly during geologic time to permit simple survival of large-pore zeolites, honeycombed feldspar, and encapsulated species. Possible microscopic clues to the proposed mineral adsorbents/catalysts are discussed to guide the planning of a systematic study of black cherts from weakly metamorphosed Archaean sediments.
Abstract:
Coupling of cerebral blood flow (CBF) and cerebral metabolic rate for oxygen (CMRO2) in physiologically activated brain states remains a subject of debate. Recently it was suggested that CBF is tightly coupled to oxidative metabolism in a nonlinear fashion. As part of this hypothesis, mathematical models of oxygen delivery to the brain have been described in which disproportionately large increases in CBF are necessary to sustain even small increases in CMRO2 during activation. We have explored the coupling of CBF and oxygen delivery by using two complementary methods. First, a more complex mathematical model was tested that differs from those recently described in that no assumptions were made regarding tissue oxygen level. Second, [15O]water CBF positron emission tomography (PET) studies in nine healthy subjects were conducted during states of visual activation and hypoxia to examine the relationship of CBF and oxygen delivery. In contrast to previous reports, our model showed that adequate tissue levels of oxygen could be maintained without the need for increased CBF or oxygen delivery. Similarly, the PET studies demonstrated that the regional increase in CBF during visual activation was not affected by hypoxia. These findings strongly indicate that the increase in CBF associated with physiological activation is regulated by factors other than local requirements for oxygen.
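As a rough illustration of the quantities involved, the Fick principle ties the three variables together as CMRO2 = CBF x CaO2 x OEF (oxygen extraction fraction). The short Python sketch below uses assumed, representative resting values (not data from this study) to show how OEF behaves when a small CMRO2 increase is met by a proportional CBF increase versus the disproportionately large CBF increase posited by the nonlinear-coupling hypothesis.

# Minimal sketch of flow-metabolism coupling via the Fick principle.
# All values are illustrative assumptions, not results from the study above.
CaO2 = 8.0          # arterial O2 content, umol O2 per mL blood (assumed)
cbf_rest = 0.50     # resting CBF, mL blood per g per min (assumed)
cmro2_rest = 1.60   # resting CMRO2, umol O2 per g per min (assumed)

def oef(cbf, cmro2, cao2=CaO2):
    """Oxygen extraction fraction from the Fick principle: CMRO2 = CBF * CaO2 * OEF."""
    return cmro2 / (cbf * cao2)

cmro2_act = 1.05 * cmro2_rest  # a 5% rise in oxidative metabolism
for label, cbf_act in [("proportional CBF rise (+5%)", 1.05 * cbf_rest),
                       ("disproportionate CBF rise (+30%)", 1.30 * cbf_rest)]:
    print(f"{label}: OEF {oef(cbf_rest, cmro2_rest):.2f} -> {oef(cbf_act, cmro2_act):.2f}")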
Abstract:
High-quality software, delivered on time and budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often to redevelop the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "To drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after issuing this initiative, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns. Investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the Aerospace industry have led to a number of questions about reuse and how the industry is employing reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems. Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into coding that could be counted and measured. From the search of existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort under a model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences in the development approach used in embedded projects compared with nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported using code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code for embedded systems when the platform changed. The semistructured interviews were performed to explain why the phenomena observed in the literature review and the questionnaire occurred. We asked respected industry professionals, such as senior fellows, fellows, and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification, but, especially in embedded systems, once it has to be changed, reuse of code yields few benefits. While platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. Finally, we conclude that while reuse in embedded systems and nonembedded systems is different today, the two are converging. As heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like reuse in nonembedded systems.
Abstract:
The availability of a large amount of observational data recently collected from magnetar outbursts is now calling for a complete theoretical study of outburst characteristics. In this Letter (the first of a series dedicated to modeling magnetar outbursts), we tackle the long-standing open issue of whether or not short bursts and glitches are always connected to long-term radiative outbursts. We show that the recent detection of short bursts and glitches seemingly unconnected to outbursts is only misleading our understanding of these events. We show that, in the framework of the starquake model, neutrino emission processes in the magnetar crust limit the temperature, and therefore the luminosity. This natural limit to the maximum luminosity makes outbursts associated with bright persistent magnetars barely detectable. These events are simply seen as a small luminosity increase over the already bright quiescent state, followed by a fast return to quiescence. In particular, this is the case for 1RXS J1708–4009, 1E 1841–045, SGR 1806–20, and other bright persistent magnetars. On the other hand, a similar event (with the same energetics) in a fainter source will drive a more extreme luminosity variation and a longer cooling time, as for sources such as XTE J1810–197, 1E 1547–5408, and SGR 1627–41. We conclude that the non-detection of large radiative outbursts in connection with glitches and bursts from bright persistent magnetars is not surprising per se, nor does it require any revision of the glitch and burst mechanisms as explained by current theoretical models.
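The detectability argument can be illustrated with a one-line contrast calculation; the luminosities below are rough, assumed order-of-magnitude values, not numbers from the Letter.

# Sketch: a neutrino-limited ceiling on outburst luminosity implies a small
# contrast for bright persistent magnetars and a large one for faint sources.
# All luminosities are assumed, order-of-magnitude values (erg/s).
L_MAX = 3e35  # assumed maximum outburst luminosity set by crustal neutrino cooling

quiescent = {
    "bright persistent magnetar (assumed)": 1.5e35,
    "faint transient magnetar (assumed)": 1.0e33,
}

for source, L_q in quiescent.items():
    print(f"{source}: outburst/quiescence contrast ~ {L_MAX / L_q:.0f}x")
# The same capped event is only a ~2x brightening of an already bright source,
# but a ~300x brightening (with a longer cooling time) of a faint one.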
Abstract:
This paper proposes the model of an Innovative Monitoring Network of properly connected nodes, developed as an Information and Communication Technology (ICT) solution for the preventive maintenance of historical centres through early warnings. It is well known that the protection of historical centres generally proceeds from large-scale monitoring to local monitoring, and it could be supported by a single ICT solution. In more detail, the model of a virtually organized monitoring system could enable the implementation of automated analyses that present various alert levels. An adequate ICT tool would make it possible to define a monitoring network for shared processing of data and results. Thus, a possible retrofit solution could be planned for pilot cases shared among the nodes of the network, on the basis of a suitable procedure that draws on a retrofit catalogue. The final objective is to provide the model of an innovative tool to identify hazards, damage, and possible retrofit solutions for historical centres, ensuring easy early-warning support for stakeholders. The action could proactively target the needs and requirements of users, such as decision makers responsible for damage mitigation and the safeguarding of cultural heritage assets.
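A minimal sketch of the kind of automated, multi-level alerting a network node could run before sharing data is given below; the sensor type, readings, and thresholds are hypothetical placeholders, not part of the proposed model.

# Hypothetical sketch: map a structural-health reading onto one of several
# alert levels before it is shared across the monitoring network.
ALERT_LEVELS = ["none", "attention", "warning", "alarm"]

def alert_level(crack_width_mm, thresholds=(0.5, 1.0, 2.0)):
    """Return the alert level for a crack-width reading, given fixed thresholds (mm)."""
    level = sum(crack_width_mm >= t for t in thresholds)
    return ALERT_LEVELS[level]

readings = {"node-A": 0.3, "node-B": 1.2, "node-C": 2.4}  # hypothetical nodes, widths in mm
for node, width in readings.items():
    print(node, alert_level(width))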
Abstract:
Determination of reliable solute transport parameters is an essential aspect of characterizing the mechanisms and processes involved in solute transport (e.g., pesticides, fertilizers, contaminants) through the unsaturated zone. A rapid, inexpensive method to estimate the dispersivity parameter at the field scale is presented herein. It is based on the quantification of total bromine in soil by the solid-state X-ray fluorescence technique, combined with an inverse numerical modeling approach. The results show that this methodology is a good alternative to the classic Br− determination in soil water by ion chromatography. A good agreement between the observed and simulated total soil Br is reported. The results highlight the potential applicability of the two combined techniques for readily inferring solute transport parameters under field conditions.
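To illustrate the inverse-modeling step, the sketch below fits a dispersivity value so that a one-dimensional advection-dispersion (Ogata-Banks) profile matches a bromine depth profile; the velocity, time, and concentration values are synthetic placeholders, not measurements from this study.

# Sketch of the inverse step: fit dispersivity so a 1-D advection-dispersion
# solution matches a measured Br depth profile. All data below are synthetic.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

v = 1.5   # assumed pore-water velocity, cm/day
t = 10.0  # assumed time since Br application, days

def cde_profile(x, dispersivity, c0=1.0):
    """Leading term of the Ogata-Banks solution of the advection-dispersion equation."""
    D = dispersivity * v  # hydrodynamic dispersion coefficient, cm^2/day
    return 0.5 * c0 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

depth = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 22.0])       # cm (synthetic)
c_obs = np.array([0.96, 0.88, 0.73, 0.55, 0.36, 0.18])     # relative Br concentration (synthetic)

(dispersivity_fit,), _ = curve_fit(cde_profile, depth, c_obs, p0=[1.0])
print(f"fitted dispersivity ~ {dispersivity_fit:.2f} cm")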
Abstract:
This paper proposes the implementation of different non-local Planetary Boundary Layer (PBL) schemes within the Regional Atmospheric Modeling System (RAMS) model. The two selected PBL parameterizations are the Medium-Range Forecast (MRF) PBL and its updated version, known as the Yonsei University (YSU) PBL. YSU is a first-order scheme that uses non-local eddy diffusivity coefficients to compute turbulent fluxes. It is based on the MRF and improves it with an explicit treatment of entrainment. To evaluate the RAMS results for these PBL parameterizations, a series of numerical simulations has been performed and contrasted with the results obtained using the Mellor and Yamada (MY) scheme, which is also widely used and is the standard PBL scheme in the RAMS model. The numerical study carried out here focuses on mesoscale circulation events during the summer, as these meteorological situations dominate this season of the year on the Western Mediterranean coast. In addition, the sensitivity of these PBL parameterizations to the initial soil moisture content is also evaluated. The results show a warmer and moister PBL for the YSU scheme compared with both MRF and MY. The model also tends to overestimate the observed temperature and to underestimate the observed humidity for all PBL schemes under a low initial soil moisture content. In addition, the bias between the model and the observations is significantly reduced by increasing the initial soil moisture of the corresponding run. Thus, varying this parameter has a positive effect and improves the simulated results relative to the observations. However, there is still a significant overestimation of the wind speed over flatter terrain, independently of the PBL scheme and the initial soil moisture used, although RAMS reproduces a different degree of accuracy across the different sensitivity tests.
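For reference, MRF-type schemes prescribe a non-local eddy diffusivity profile of the form K(z) = k · w_s · z · (1 − z/h)^2 (Hong and Pan 1996); the sketch below evaluates that profile for an assumed velocity scale and PBL height rather than values diagnosed in these simulations.

# Sketch of the MRF/YSU-style non-local eddy diffusivity profile,
# K(z) = k * w_s * z * (1 - z/h)**2. The values of w_s and h are assumed.
import numpy as np

KARMAN = 0.4   # von Karman constant
w_s = 1.2      # mixed-layer velocity scale, m/s (assumed)
h = 1000.0     # diagnosed PBL height, m (assumed)

def eddy_diffusivity(z):
    """Momentum eddy diffusivity inside the PBL; zero at and above the PBL top."""
    z = np.asarray(z, dtype=float)
    k = KARMAN * w_s * z * (1.0 - z / h) ** 2
    return np.where(z < h, k, 0.0)

levels = np.array([50.0, 250.0, 500.0, 750.0, 950.0])  # heights above ground, m
for z, k in zip(levels, eddy_diffusivity(levels)):
    print(f"z = {z:6.0f} m   K = {k:6.1f} m^2/s")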
Abstract:
Thesis (Master, Computing) -- Queen's University, 2016.
Abstract:
The logical (or logic) formalism is increasingly used to model regulatory and signaling networks. Complementing these applications, several groups contributed various methods and tools to support the definition and analysis of logical models. After an introduction to the logical modeling framework and to several of its variants, we review here a number of recent methodological advances to ease the analysis of large and intricate networks. In particular, we survey approaches to determine model attractors and their reachability properties, to assess the dynamical impact of variations of external signals, and to consistently reduce large models. To illustrate these developments, we further consider several published logical models for two important biological processes, namely the differentiation of T helper cells and the control of mammalian cell cycle.
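As a toy illustration of attractor detection in the synchronous logical framework, the sketch below exhaustively enumerates the state space of a small three-node Boolean network; the network itself is invented for illustration, not one of the published T-helper or cell-cycle models discussed here.

# Sketch of synchronous attractor detection on a toy Boolean network.
# The update rules are illustrative, not a published biological model.
from itertools import product

rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["A"] and s["B"],
}

def step(state):
    """Synchronous update: every component is recomputed from the current state."""
    return {node: rule(state) for node, rule in rules.items()}

def attractor_from(state):
    """Iterate until a state repeats; return the cycle (the attractor) that is reached."""
    trajectory = []
    while state not in trajectory:
        trajectory.append(state)
        state = step(state)
    return trajectory[trajectory.index(state):]

attractors = set()
for bits in product([False, True], repeat=len(rules)):
    cycle = attractor_from(dict(zip(rules, bits)))
    attractors.add(frozenset(tuple(sorted(s.items())) for s in cycle))

print(f"{len(attractors)} attractor(s) found across {2 ** len(rules)} states")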
Abstract:
Arkansas State Highway and Transportation Department, Little Rock
Abstract:
"February 1979."
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.