942 results for "Inovation models in nets"
Abstract:
BACKGROUND: SOX2 (SRY-box 2) is required to maintain a variety of stem cells, is overexpressed in some solid tumors, and is expressed in epithelial cells of the lung. METHODOLOGY/PRINCIPAL FINDINGS: We show that SOX2 is overexpressed in human squamous cell lung tumors and some adenocarcinomas. We have generated mouse models in which Sox2 is upregulated in epithelial cells of the lung during development and in the adult. In both cases, overexpression leads to extensive hyperplasia. In the terminal bronchioles, a trachea-like pseudostratified epithelium develops, with p63-positive cells underlying columnar cells. Over 12-34 weeks, about half of the mice expressing the highest levels of Sox2 develop carcinoma. These tumors resemble adenocarcinoma but express the squamous marker Trp63 (p63). CONCLUSIONS: These findings demonstrate that Sox2 overexpression both induces a proximal phenotype in the distal airways/alveoli and leads to cancer.
Abstract:
Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or through generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem, we propose a novel class of Bayesian Gaussian copula factor models which decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop efficient parameter-expanded Gibbs sampling for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa.
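The extended rank likelihood mentioned above uses only the ranks of each measured variable, so each latent Gaussian value is constrained to an interval determined by the observed ordering. Below is a minimal Python sketch of that Gibbs update for a single margin, assuming the conditional mean and standard deviation of each latent value are already available; the function and variable names are illustrative and do not come from the bfa package.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_latent_column(y, z, cond_mean, cond_sd, rng=None):
    """One Gibbs update of the latent Gaussian column z under the extended
    rank likelihood: each z[i] is redrawn from a normal truncated to the
    interval implied by the ranks of the observed y."""
    rng = rng or np.random.default_rng()
    y = np.asarray(y)
    z = np.asarray(z, dtype=float).copy()
    for i in range(len(y)):
        lower = np.max(z[y < y[i]], initial=-np.inf)  # largest latent value among smaller y
        upper = np.min(z[y > y[i]], initial=np.inf)   # smallest latent value among larger y
        a = (lower - cond_mean[i]) / cond_sd[i]       # standardized truncation bounds
        b = (upper - cond_mean[i]) / cond_sd[i]
        z[i] = truncnorm.rvs(a, b, loc=cond_mean[i], scale=cond_sd[i], random_state=rng)
    return z
```

In the full copula factor model the conditional mean and standard deviation would come from the current factor loadings and scores; the point of the sketch is only the rank-based truncation that decouples the factors from the marginal distributions.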
Abstract:
To maintain a strict balance between demand and supply in the US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. For each time period and power plant, this model determines the startup and shutdown times, the amount of power production, and the provision of spinning and non-spinning power generation reserves. Such a deterministic optimization model takes as input the characteristics of all the generating units, such as their installed power generation capacity, ramp rates, minimum up and down time requirements, and marginal costs of production, as well as the forecast of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the levels of forecast error in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than using fixed reserve targets as an input, stochastic market clearing models take different scenarios of wind power into consideration and determine reserve schedules as output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of stochastic and deterministic market clearing models. The two models are compared in their ability to contribute to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain, owing to the multi-dimensional performance metrics considered here and the difficulty of setting model parameters in a way that does not advantage or disadvantage either modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL) and wind spillage costs have on the comparison of stochastic versus deterministic market clearing models.
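A toy Python illustration of the difference between the two clearing philosophies described above, with two thermal units, three wind scenarios and purely hypothetical numbers (the unit data, reserve target and VOLL are placeholders, not PJM or BPA values): the deterministic model commits capacity against forecast wind plus a fixed reserve, while the stochastic model commits capacity to minimise expected cost over the wind scenarios.

```python
import itertools

VOLL = 5000.0                                    # value of lost load, $/MWh (assumed)
LOAD = 100.0                                     # fixed demand, MWh
WIND_SCENARIOS = [(0.3, 40.0), (0.4, 25.0), (0.3, 10.0)]   # (probability, wind MWh)
WIND_FORECAST = sum(p * w for p, w in WIND_SCENARIOS)
RESERVE = 20.0                                   # fixed reserve target, MWh (assumed)
UNITS = [(60.0, 500.0, 30.0), (40.0, 300.0, 80.0)]   # (capacity, no-load cost, marginal cost)

def commitment_cost(on):
    return sum(fixed for flag, (_, fixed, _) in zip(on, UNITS) if flag)

def dispatch_cost(on, wind):
    """Dispatch committed units in merit order; any shortfall is shed at VOLL."""
    net = max(LOAD - wind, 0.0)
    cost = 0.0
    for flag, (cap, _, mc) in sorted(zip(on, UNITS), key=lambda t: t[1][2]):
        if flag:
            gen = min(cap, net)
            net -= gen
            cost += gen * mc
    return cost + net * VOLL

def expected_cost(on):
    return commitment_cost(on) + sum(p * dispatch_cost(on, w) for p, w in WIND_SCENARIOS)

# Deterministic clearing: cheapest commitment covering forecast net load plus the reserve target.
feasible = [on for on in itertools.product([0, 1], repeat=len(UNITS))
            if sum(c for f, (c, _, _) in zip(on, UNITS) if f) >= LOAD - WIND_FORECAST + RESERVE]
det = min(feasible, key=commitment_cost)

# Stochastic clearing: commitment minimising expected cost over the wind scenarios.
sto = min(itertools.product([0, 1], repeat=len(UNITS)), key=expected_cost)

print("deterministic commitment", det, "expected cost", round(expected_cost(det), 1))
print("stochastic commitment   ", sto, "expected cost", round(expected_cost(sto), 1))
```

In this tiny example both approaches happen to commit the same units; the two models only diverge once the unit data, scenario sets and reserve rules are rich enough that hedging against adverse wind scenarios beats a single fixed target, which is exactly the comparison the paper sets up at PJM scale.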
Abstract:
Economic analyses of climate change policies frequently focus on reductions of energy-related carbon dioxide emissions via market-based, economy-wide policies. The current course of environment and energy policy debate in the United States, however, suggests an alternative outcome: sector-based and/or inefficiently designed policies. This paper uses a collection of specialized, sector-based models in conjunction with a computable general equilibrium model of the economy to examine and compare these policies at an aggregate level. We examine the relative cost of different policies designed to achieve the same quantity of emission reductions. We find that excluding a limited number of sectors from an economy-wide policy does not significantly raise costs. Focusing policy solely on the electricity and transportation sectors doubles costs, however, and using non-market policies can raise costs by a factor of ten. These results are driven in part by, and are sensitive to, our modeling of pre-existing tax distortions. Copyright © 2006 by the IAEE. All rights reserved.
Abstract:
Our research was conducted to improve the timeliness, coordination, and communication during the detection, investigation, and decision-making phases of the response to an aerosolized anthrax attack in the metropolitan Washington, DC, area, with the goal of reducing casualties. We gathered information on the current response protocols through an extensive literature review and interviews with relevant officials and experts in order to identify potential problems in the various steps of detection, investigation, and response. Interviewing officials from private and government sector agencies allowed the development of a set of models of interactions and a communication network to identify discrepancies and redundancies that would lengthen the delay in initiating a public health response. In addition, we created a computer simulation that models aerosol spread using weather patterns and population density to estimate the infected population within a target region, depending on the virulence and dimensions of the weaponized spores. We developed conceptual models in order to design recommendations to be presented to our collaborating contacts and agencies, which could use such policy and analysis interventions to improve the overall response to an aerosolized anthrax attack, primarily through changes to emergency protocol functions and suggestions for technological detection and monitoring.
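As a sketch of the kind of calculation such a simulation performs, the following Python toy combines a standard ground-level Gaussian plume with a uniform population density and an inhaled-dose threshold to count the population at risk. Every number here (source term, dispersion coefficients, breathing rate, infective dose, density, grid size) is a placeholder assumption, not a value from the study.

```python
import numpy as np

# All parameters below are placeholder assumptions, not values from the study.
Q = 1.0e12             # spores released per second (hypothetical source term)
U = 4.0                # wind speed, m/s
BREATHING = 3.3e-4     # inhalation rate, m^3/s (~20 L/min)
EXPOSURE = 600.0       # seconds spent in the plume
DOSE_THRESHOLD = 8000  # inhaled spores assumed sufficient for infection
POP_DENSITY = 4000 / 1e6   # people per m^2 (~4000 per km^2)

def sigma_y(x):   # crude power-law horizontal dispersion (placeholder coefficients)
    return 0.08 * x / np.sqrt(1.0 + 0.0001 * x)

def sigma_z(x):   # crude power-law vertical dispersion (placeholder coefficients)
    return 0.06 * x / np.sqrt(1.0 + 0.0015 * x)

def ground_conc(x, y):
    """Ground-level Gaussian plume concentration (spores/m^3) for a ground release."""
    sy, sz = sigma_y(x), sigma_z(x)
    return Q / (np.pi * U * sy * sz) * np.exp(-y ** 2 / (2.0 * sy ** 2))

# Evaluate on a 100 m grid downwind and count the population above the dose threshold.
xs = np.arange(100.0, 30000.0, 100.0)
ys = np.arange(-5000.0, 5000.0, 100.0)
X, Y = np.meshgrid(xs, ys)
dose = ground_conc(X, Y) * BREATHING * EXPOSURE
area_at_risk = np.sum(dose > DOSE_THRESHOLD) * 100.0 * 100.0   # m^2 above the threshold
print("estimated people at risk of infection:", int(area_at_risk * POP_DENSITY))
```

A production model of the kind described would replace the uniform density and fixed meteorology with gridded census and weather data, but the structure (dispersion, dose, threshold, population overlay) is the same.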
Abstract:
Mathematical models of straight-grate pellet induration processes have been developed and carefully validated by a number of workers over the past two decades. However, the subsequent exploitation of these models in process optimization is less clear, but obviously requires a sound understanding of how the key factors control the operation. In this article, we show how a thermokinetic model of pellet induration, validated against operating data from one of the Iron Ore Company of Canada (IOCC) lines in Canada, can be exploited in process optimization from the perspective of fuel efficiency, production rate, and product quality. Most existing processes are restricted in the options available for process optimization. Here, we review the role of each of the drying (D), preheating (PH), firing (F), after-firing (AF), and cooling (C) phases of the induration process. We then use the induration process model to evaluate whether the first drying zone is best operated with up-draft or down-draft gas flow, and we optimize the on-gas temperature profile in the hood of the PH, F, and AF zones to reduce the burner fuel by at least 10 pct over the long term. Finally, we consider how efficient and flexible the process could be if some of the structural constraints were removed (i.e., addressed at the design stage). The analysis suggests it should be possible to reduce the burner fuel load by 35 pct, easily increase production by 5+ pct, and improve pellet quality.
Abstract:
This paper describes how modeling technology has been used to provide fatigue lifetime data for two flip-chip models. Full-scale three-dimensional modeling of flip-chips under cyclic thermal loading has been combined with solder joint stand-off height prediction to analyze the stress and strain conditions in the two models. The Coffin-Manson empirical relationship is employed to predict the fatigue lifetimes of the solder interconnects. In order to help designers in selecting the underfill material and the printed circuit board, the Young's modulus and the coefficient of thermal expansion of the underfill, as well as the thickness of the printed circuit boards, are treated as variable parameters. Fatigue lifetimes are therefore calculated over a range of these material and geometry parameters. In this paper we also describe how the use of micro-via technology may affect fatigue life.
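The Coffin-Manson relationship referred to above links the plastic strain range per thermal cycle to the number of cycles to failure. A minimal Python sketch follows, with the fatigue ductility coefficient and exponent set to illustrative placeholder values rather than the calibrated solder constants used in the paper.

```python
def coffin_manson_cycles(delta_eps_p, eps_f=0.325, c=-0.5):
    """Cycles to failure N_f from the plastic strain range via Coffin-Manson:
    delta_eps_p / 2 = eps_f * (2 * N_f) ** c, solved for N_f.
    eps_f (fatigue ductility coefficient) and c (fatigue ductility exponent)
    are placeholder values, not calibrated solder constants."""
    return 0.5 * (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)

# Sweep the strain ranges the finite-element model might report for
# different underfill and board choices.
for strain in (0.005, 0.01, 0.02):
    print(f"plastic strain range {strain:.3f}  ->  {coffin_manson_cycles(strain):,.0f} cycles")
```

In the workflow described, the strain range comes from the three-dimensional thermal-cycling analysis for each combination of underfill modulus, underfill CTE and board thickness; the relationship then converts each strain range into a predicted lifetime.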
Abstract:
Computer-based mathematical models describing aircraft fire have a role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in post mortem accident investigation. As the costs involved in performing large-scale fire experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be prohibitively high, the development and use of these modelling tools may become essential if these aircraft are to prove a safe and viable reality. By describing the present capabilities and limitations of aircraft fire models, this paper will examine the future development of these models in the areas of large-scale applications through parallel computing, combustion modelling and extinguishment modelling.
Abstract:
High pollution levels have often been observed in urban street canyons due to increased traffic emissions and reduced natural ventilation. Microscale dispersion models with different levels of complexity may be used to assess urban air quality and support decision-making for pollution control strategies and traffic planning. Mathematical models calculate pollutant concentrations by solving either analytically a simplified set of parametric equations or numerically a set of differential equations that describe in detail wind flow and pollutant dispersion. Street canyon models, which might also include simplified photochemistry and particle deposition–resuspension algorithms, are often nested within larger-scale urban dispersion codes. Reduced-scale physical models in wind tunnels may also be used for investigating atmospheric processes within urban canyons and validating mathematical models. A range of monitoring techniques is used to measure pollutant concentrations in urban streets. Point measurement methods (continuous monitoring, passive and active pre-concentration sampling, grab sampling) are available for gaseous pollutants. A number of sampling techniques (mainly based on filtration and impaction) can be used to obtain the mass concentration, size distribution and chemical composition of particles. A combination of different sampling/monitoring techniques is often adopted in experimental studies. Relatively simple mathematical models have usually been used in association with field measurements to obtain and interpret time series of pollutant concentrations at a limited number of receptor locations in street canyons. On the other hand, advanced numerical codes have often been applied in combination with wind tunnel and/or field data to simulate small-scale dispersion within the urban canopy.
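At the simple, parametric end of the model hierarchy described above, a box model treats the canyon as a uniformly mixed volume ventilated at roof level. A minimal Python sketch, assuming uniform mixing and an exchange velocity taken as a fixed fraction of the roof-level wind speed (the fraction and the example numbers are placeholders, not calibrated values):

```python
def canyon_box_concentration(q_line, width, u_roof, vent_frac=0.1):
    """Steady-state street-canyon box model: traffic emissions per unit street
    length [g/(m s)] are balanced by roof-level ventilation with an exchange
    velocity vent_frac * u_roof [m/s]; returns the canyon-average concentration in g/m^3."""
    w_e = max(vent_frac * u_roof, 1e-3)   # floor avoids division by zero in calm conditions
    return q_line / (width * w_e)

# Example: 1 mg/(m s) of emissions in a 20 m wide canyon with 3 m/s roof-level wind.
print(canyon_box_concentration(1e-3, 20.0, 3.0) * 1e6, "µg/m^3")
```

The numerical codes mentioned in the abstract replace this single mass balance with a resolved flow and dispersion field, at correspondingly higher computational cost.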
Abstract:
From model geometry creation to model analysis, the stages in between, such as mesh generation, are the most manpower-intensive phase of a mesh-based computational mechanics simulation process. The model analysis, on the other hand, is the most computing-intensive phase. Advanced computational hardware and software have significantly reduced the computing time - and, more importantly, the trend is downward. With the kinds of models envisaged, which are larger, more complex in geometry and modelling, and multiphysics, there is no clear trend that the manpower-intensive phase will decrease significantly in time - in the present way of operation it is more likely to increase with model complexity. In this paper we address this dilemma with collaborating components for models in electronic packaging applications.
Abstract:
When designing a new passenger ship or modifying an existing design, how do we ensure that the proposed design and crew emergency procedures are safe from an evacuation standpoint in the event of fire or other incident? In the wake of major maritime disasters such as the Scandinavian Star, Herald of Free Enterprise and Estonia, and in light of the growth in the numbers of high-density high-speed ferries and large-capacity cruise ships, issues concerning the evacuation of passengers and crew at sea are receiving renewed interest. Fire and evacuation models are now available with features such as the ability to realistically simulate the spread of fire and fire suppression systems and the human response to fire, as well as the capability to model human performance in heeled orientations, linked to a virtual reality environment that produces realistic visualisations of modelled scenarios; these tools can be used to aid the engineer in assessing ship design and procedures. This paper describes the maritimeEXODUS ship evacuation model and the SMARTFIRE fire simulation model, and provides an example application demonstrating the use of the models in performing fire and evacuation analysis for a large passenger ship, partially based on the requirements of MSC Circular 1033. The fire simulations include the action of a water mist system.
Abstract:
Advertising standardisation versus adaptation has been discussed in some detail in the marketing literature. Despite previous attempts, there is still no widely used decision-making model available that has been accepted by marketing practitioners and academics. This paper examines the development of this important area by reviewing six prominent models in the advertising standardisation/adaptation literature. It shows why there has been a lack of development in the current literature and why it is crucial to address this problem. Important areas for future research are suggested in order to find a solution.
Abstract:
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long-term, such an approach will further aid algorithm development for ocean-colour studies.
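The bootstrap test of ranking uncertainty can be sketched as follows: resample the in situ match-ups with replacement, recompute each algorithm's error statistic on every resample, and examine the spread of the resulting ranks. The sketch below uses RMSE as the statistic and illustrative algorithm names; the paper's scoring scheme is more elaborate.

```python
import numpy as np

def bootstrap_rank(errors, n_boot=1000, seed=0):
    """errors: dict mapping algorithm name -> array of per-match-up errors
    (e.g. log10 residuals against NOMAD). Returns the 5th/50th/95th
    percentiles of each algorithm's bootstrap rank (1 = lowest RMSE)."""
    rng = np.random.default_rng(seed)
    names = list(errors)
    n = len(next(iter(errors.values())))
    ranks = {name: [] for name in names}
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # paired resampling of the match-ups
        rmse = {name: float(np.sqrt(np.mean(errors[name][idx] ** 2))) for name in names}
        for rank, name in enumerate(sorted(names, key=rmse.get), start=1):
            ranks[name].append(rank)
    return {name: np.percentile(ranks[name], [5, 50, 95]) for name in names}

# Usage with hypothetical residual arrays for three algorithms:
# bootstrap_rank({"OC4": oc4_err, "GSM": gsm_err, "QAA": qaa_err})
```

Overlapping rank intervals across resamples correspond to the paper's finding that some semi-analytical and empirical algorithms retrieve chlorophyll with comparable performance.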
Abstract:
Marine legislation is becoming more complex and marine ecosystem-based management is specified in national and regional legislative frameworks. Shelf-seas community and ecosystem models (hereafter termed ecosystem models) are central to the delivery of ecosystem-based management, but there is limited uptake and use of model products by decision makers in Europe and the UK in comparison with other countries. In this study, the challenges to the uptake and use of ecosystem models in support of marine environmental management are assessed using the UK capability as an example. The UK has a broad capability in marine ecosystem modelling, with at least 14 different models that support management, but few examples exist of ecosystem modelling that underpins policy or management decisions. To improve understanding of policy and management issues that can be addressed using ecosystem models, a workshop was convened that brought together advisors, assessors, biologists, social scientists, economists, modellers, statisticians, policy makers, and funders. Some policy requirements were identified that can be addressed without further model development, including: attribution of environmental change to underlying drivers, integration of models and observations to develop more efficient monitoring programmes, assessment of indicator performance for different management goals, and the costs and benefits of legislation. Multi-model ensembles are being developed in cases where many models exist, but model structures are very diverse, making a standardised approach to combining outputs a significant challenge, and there is a need for new methodologies for describing, analysing, and visualising uncertainties. A stronger link to social and economic systems is needed to increase the range of policy-related questions that can be addressed. It is also important to improve communication between policy and modelling communities so that there is a shared understanding of the strengths and limitations of ecosystem models.