985 results for series-parallel model


Relevance: 80.00%

Abstract:

A comprehensive system-dynamics model of fiscal revenue and expenditure was established by combining several scientific forecasting methods with actual fiscal and economic conditions. The model brings together the strong parameter integration of time-series analysis models and grey-system prediction models with the clear structure of system dynamics, using dynamic feedback to forecast how the system evolves. It was applied to a comprehensive quantitative forecast and analysis of the Eighth Five-Year Plan targets for budgetary fiscal revenue and expenditure of two major cities in Northeast China, with very good practical results.
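As a hedged illustration of the grey-system component mentioned in this abstract, the sketch below fits a GM(1,1) grey prediction model to a short positive series; the function name and data are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit a GM(1,1) grey prediction model to a short positive series and
    forecast `horizon` further steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])          # background (mean) sequence
    # Least-squares estimate of development coefficient a and grey input b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.empty(n + horizon)
    x0_hat[0] = x1_hat[0]
    x0_hat[1:] = np.diff(x1_hat)           # inverse accumulated generation
    return x0_hat[n:]
```

GM(1,1) is suited to short, roughly exponential series such as annual revenue figures, which is why it is commonly paired with time-series and system-dynamics models in hybrid forecasts.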

Relevance: 80.00%

Abstract:

An obstacle-avoidance algorithm for a series-parallel (hybrid) mechanism with a high degree of kinematic redundancy was studied, and its motion simulation was implemented. The algorithm applies the theory of priority ordering and uses the rotation-tensor method to carry out variable-arm-length obstacle-avoidance planning for a highly redundant manipulator whose link lengths vary with time. A motion simulation of drilling while avoiding obstacles was completed on a CAD workstation, with satisfactory results. Supported by the National Natural Science Foundation of China.
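The rotation-tensor method referred to above can be sketched with the standard Rodrigues formula; the manipulator geometry, function names, and spherical obstacle check below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def rotation_tensor(axis, theta):
    """Rodrigues rotation tensor R for a rotation by theta about a unit axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def joint_positions(lengths, axes, angles):
    """Forward kinematics: chain rotation tensors along the arm links."""
    R, p = np.eye(3), np.zeros(3)
    pts = [p.copy()]
    for L, ax, th in zip(lengths, axes, angles):
        R = R @ rotation_tensor(ax, th)
        p = p + R @ np.array([L, 0.0, 0.0])    # link lengths may vary with time
        pts.append(p.copy())
    return np.array(pts)

def clears_obstacle(points, centre, radius):
    """True if every joint point stays outside a spherical obstacle."""
    return bool(np.all(np.linalg.norm(points - centre, axis=1) > radius))
```

A priority-ordered planner would adjust the joint angles (and, here, the link lengths) of lower-priority joints first, re-running a clearance check like `clears_obstacle` after each adjustment.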

Relevance: 80.00%

Abstract:

Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth pursuit eye movements. In particular, the saccadic and smooth pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do multiple brain regions interact, including frontal cortical areas, to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade-pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas MT, MST, FPA, DLPN; saccade specification, selection, and planning areas LIP, FEF, SNr, SC; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results are in contrast with the simplest parallel model, which has no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from predictions of the simplest parallel model, which are explained by the current model.

Relevance: 80.00%

Abstract:

The paper considers single machine due date assignment and scheduling problems with n jobs in which the due dates are obtained from the processing times by adding a positive slack q. A schedule is feasible if there are no tardy jobs and the job sequence respects given precedence constraints. The value of q is chosen so as to minimize a function ϕ(F, q) which is non-decreasing in each of its arguments, where F is a certain non-decreasing earliness penalty function. Once q is chosen or fixed, the corresponding scheduling problem is to find a feasible schedule with the minimum value of the function F. In the case of arbitrary precedence constraints the problems under consideration are shown to be NP-hard in the strong sense even for F being total earliness. If the precedence constraints are defined by a series-parallel graph, both the scheduling and the due date assignment problems are proved solvable in polynomial time, provided that F is either the sum of linear functions or the sum of exponential functions. The running time of the algorithms can be reduced further if the jobs are independent.

Scope and purpose

We consider single machine due date assignment and scheduling problems and design fast algorithms for their solution under a wide range of assumptions. The problems under consideration arise in production planning when management is faced with the problem of setting realistic due dates for a number of orders. The due dates of the orders are determined by increasing the time needed for their fulfillment by a common positive slack. If the slack is set large enough, the due dates can easily be met, thereby producing a good image of the firm. This, however, may result in substantial holding costs for the finished products before they are brought to the customer. The objective is to explore the trade-off between the size of the slack and the holding costs that arise for early orders.
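To make the trade-off concrete, here is a minimal sketch (hypothetical function name; independent jobs; fixed sequence, no idle time) of how the smallest feasible common slack q and the resulting total earliness are computed when due dates are set as d_j = p_j + q:

```python
def schedule_stats(proc_times):
    """For a fixed job sequence on one machine with no idle time, return the
    minimum common slack q making every job non-tardy under d_j = p_j + q,
    and the resulting total earliness F = sum(d_j - C_j)."""
    starts, t = [], 0
    for p in proc_times:
        starts.append(t)      # start time S_j of the job in this position
        t += p                # completion time C_j = S_j + p_j
    q_min = max(starts)       # C_j <= p_j + q  is equivalent to  q >= S_j
    F = sum(q_min - s for s in starts)   # earliness of job j is q - S_j
    return q_min, F
```

Since F grows with q and ϕ(F, q) is non-decreasing in both arguments, q never needs to exceed this minimum for a fixed sequence; the hard part the paper addresses is choosing the sequence under precedence constraints.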

Relevance: 80.00%

Abstract:

We consider various single machine scheduling problems in which the processing time of a job depends either on its position in a processing sequence or on its start time. We focus on problems of minimizing the makespan or the sum of (weighted) completion times of the jobs. In many situations we show that the objective function is priority-generating, and therefore the corresponding scheduling problem under series-parallel precedence constraints is polynomially solvable. In other situations we provide counter-examples that show that the objective function is not priority-generating.
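As an illustrative instance of a position-dependent processing-time model (the exponent, default value, and function name below are assumptions, not the paper's exact setting), the sum of completion times under a polynomial learning effect p·r^a can be evaluated as:

```python
def total_completion_time(proc_times, a=-0.3):
    """Sum of completion times when the actual time of the job in position r
    (1-indexed) is p * r**a -- a standard positional learning-effect model."""
    completions, t = [], 0.0
    for r, p in enumerate(proc_times, start=1):
        t += p * r ** a           # later positions run faster when a < 0
        completions.append(t)
    return sum(completions)
```

With such position-dependent times, comparing two sequences reduces to evaluating this objective; a priority-generating objective guarantees that such pairwise comparisons extend consistently to series-parallel precedence constraints.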

Relevance: 80.00%

Abstract:

Thin-film capacitors, with barium strontium titanate (BST) dielectric layers between 7.5 and 950 nm in thickness, were fabricated by pulsed-laser deposition. Both crystallography and cation chemistry were consistent with successful growth of the BST perovskite. At room temperature, all capacitors displayed frequency dispersion such that epsilon (100 kHz)/epsilon (100 Hz) was greater than 0.75. The dielectric constant as a function of thickness was fitted, using the series capacitor model, for BST thicknesses greater than 70 nm. This yielded a large interfacial d(i)/epsilon (i) ratio of 0.40 +/-0.05 nm, implying a highly visible parasitic dead layer within the capacitor structure. Modeled consideration of the dielectric behavior for BST films, whose total thickness was below that of the dead layer, predicted anomalies in the plots of d/epsilon against d at the dead-layer thickness. In the capacitors studied here, no anomaly was observed. Hence, either (i) 7.5 nm is an upper limit for the total dead-layer thickness in the SRO/BST/Au system, or (ii) dielectric collapse is not associated with a distinct interfacial dead layer, and is instead due to a through-film effect. (C) 2001 American Institute of Physics.
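The series-capacitor analysis used here amounts to a straight-line fit of d/ε against d; the sketch below (synthetic data, hypothetical function name) recovers the bulk permittivity from the slope and the interfacial d_i/ε_i ratio from the intercept.

```python
import numpy as np

def fit_series_capacitor(d_nm, eps):
    """Series-capacitor model: d/eps = d_i/eps_i + d/eps_bulk, so a linear
    fit of d/eps versus d gives eps_bulk (1/slope) and d_i/eps_i (intercept)."""
    d_nm = np.asarray(d_nm, float)
    y = d_nm / np.asarray(eps, float)
    slope, intercept = np.polyfit(d_nm, y, 1)
    return 1.0 / slope, intercept
```

The predicted anomaly discussed in the abstract would appear as a departure of the d/ε-versus-d points from this straight line once d drops below the dead-layer thickness.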

Relevance: 80.00%

Abstract:

Previous papers have noted the difficulty in obtaining neural models which are stable under simulation when trained using prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex. (c) 2006 Elsevier B.V. All rights reserved.
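The distinction between the two identification structures can be sketched as follows (the first-order model and function names are illustrative assumptions): series-parallel (one-step-ahead) prediction regresses on measured past outputs, while parallel (simulation) mode feeds the model's own predictions back into its regressor.

```python
import numpy as np

def one_step_predictions(model, y, u):
    """Series-parallel mode: the regressor uses *measured* past outputs."""
    return np.array([model(y[k - 1], u[k - 1]) for k in range(1, len(y))])

def free_run_simulation(model, y0, u):
    """Parallel mode: the model feeds back its *own* previous prediction."""
    yhat = [y0]
    for k in range(len(u)):
        yhat.append(model(yhat[-1], u[k]))
    return np.array(yhat[1:])
```

For a perfect model the two modes agree; for an imperfect one, parallel-mode errors accumulate through the feedback path, which is why training in series-parallel mode alone need not yield a model that is accurate, or even stable, under simulation.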

Relevance: 80.00%

Abstract:

The objective of this study was to evaluate the effects of antimicrobial drug use, gastric acid-suppressive agent use, and infection control practices on the incidence of Clostridium difficile-associated diarrhea (CDAD) in a 426-bed general teaching hospital in Northern Ireland. The study was retrospective and ecological in design. A multivariate autoregressive integrated moving average (time-series analysis) model was built to relate CDAD incidence with antibiotic use, gastric acid-suppressive agent use, and infection control practices within the hospital over a 5-year period (February 2002 to March 2007). The findings of this study showed that temporal variation in CDAD incidence followed temporal variations in expanded-spectrum cephalosporin use (average delay = 2 months; variation of CDAD incidence = 0.01/100 bed-days), broad-spectrum cephalosporin use (average delay = 2 months; variation of CDAD incidence = 0.02/100 bed-days), fluoroquinolone use (average delay = 3 months; variation of CDAD incidence = 0.004/100 bed-days), amoxicillin-clavulanic acid use (average delay = 1 month; variation of CDAD incidence = 0.002/100 bed-days), and macrolide use (average delay = 5 months; variation of CDAD incidence = 0.002/100 bed-days). Temporal relationships were also observed between CDAD incidence and use of histamine-2 receptor antagonists (H2RAs; average delay = 1 month; variation of CDAD incidence = 0.001/100 bed-days). The model explained 78% of the variance in the monthly incidence of CDAD. The findings of this study highlight a temporal relationship between certain classes of antibiotics, H2RAs, and CDAD incidence. The results of this research can help hospitals to set priorities for restricting the use of specific antibiotic classes, based on the effect size of each class and the delay necessary to observe an effect.
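A much-simplified, hypothetical stand-in for the delay terms of such a transfer-function time-series model is a lagged cross-correlation scan, which finds the delay at which an exposure series best predicts a later incidence series:

```python
import numpy as np

def best_lag(exposure, incidence, max_lag=6):
    """Return the delay (in time steps) at which the exposure series
    correlates most strongly with the later incidence series, plus the
    correlation at that delay. A crude proxy for ARIMA transfer-function
    delay estimation, for illustration only."""
    best, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        x = exposure[:len(exposure) - lag] if lag else exposure
        y = incidence[lag:]               # incidence shifted back by `lag`
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best, best_r
```

A full analysis like the one described above would additionally difference the series and model autocorrelation before attributing incidence changes to any exposure.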

Relevance: 80.00%

Abstract:

The basic theory behind conventional colourimetric and fluorimetric optical sensors for CO2 is examined and special attention is given to the effect on sensor response of the key parameters of initial base concentration and dye acid dissociation constant, K_D. Experimental results obtained in aqueous solution using a variety of different dyes and initial base concentrations are consistent with the predictions made by the theoretical model. A series of model-generated pK_D versus %CO2 curves for different initial base concentrations allow those interested in constructing an optical CO2 sensor to readily identify the optimum dye/initial base combination for their sensor; the response of the sensor can be subsequently fine-tuned through a minor variation in the initial base concentration. The model and all its predictions appear also to apply to the new generation of plastic film CO2 sensors which have just been developed.
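Under the common simplifying assumption that the sensor's internal phase behaves as a bicarbonate buffer and the dye obeys the Henderson-Hasselbalch relation with a fixed pK_D, the protonated-dye fraction can be modelled as below; the constants, function name, and buffer approximation are generic assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def protonated_fraction(pco2_percent, base_molar, pKD,
                        pKa1=6.35, kH=0.034):
    """Fraction of the pH indicator in its protonated (acid) form for a
    given %CO2, initial base concentration (mol/L), and dye pK_D.
    Assumed constants: CO2 Henry solubility ~0.034 mol/(L*atm) and
    carbonic-acid pKa1 ~6.35 at room temperature."""
    co2_aq = kH * pco2_percent / 100.0           # dissolved CO2, mol/L
    pH = pKa1 + np.log10(base_molar / co2_aq)    # bicarbonate-buffer pH
    return 1.0 / (1.0 + 10.0 ** (pH - pKD))      # Henderson-Hasselbalch
```

In this toy model, raising the initial base concentration shifts the response curve toward higher %CO2, which mirrors the fine-tuning role of the base concentration described in the abstract.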

Relevance: 80.00%

Abstract:

Epitaxial BaTiO3 films and epitaxial BaTiO3/SrTiO3 multilayers were grown by pulsed laser deposition on vicinal surfaces of (001)-oriented Nb-doped SrTiO3 (SrTiO3:Nb) single-crystal substrates. Atomic force microscopy was used to investigate the surface topography of the deposited films. The morphology of the films, of the BaTiO3/SrTiO3 interfaces, and of the column boundaries was investigated by cross-sectional high-resolution transmission electron microscopy. Measurements of the dielectric properties were performed by comparing BaTiO3 films and BaTiO3/SrTiO3 multilayers of different numbers of individual layers, but equal overall thickness. The dielectric loss saturates for a thickness above 300 nm and linearly decreases with decreasing film thickness below a thickness of 75 nm. At the same thickness of 75 nm, the thickness dependence of the dielectric constant also exhibits a change in the linear slope both for BaTiO3 films and BaTiO3/SrTiO3 multilayers. This behaviour is explained by the change observed in the grain morphology at a thickness of 75 nm. For the thickness dependence of the dielectric constant, two phenomenological models are considered, viz. a 'series-capacitor' model and a 'dead-layer' model.

Relevance: 80.00%

Abstract:

Epitaxial BaTiO3 films and BaTiO3/SrTiO3 multilayers were grown by pulsed laser deposition (PLD) on (001)-oriented Nb-doped SrTiO3 (SrTiO3:Nb) substrates. Measurements of the dielectric properties were performed comparing BaTiO3 films and BaTiO3/SrTiO3 multilayers with different numbers of individual layers, but equal overall thickness. The dielectric loss saturates for thicknesses above 300 nm, decreases linearly with decreasing film thickness below 75 nm, and is independent of the number of individual layers, pointing to an interface effect. The thickness dependence of the dielectric constant of BaTiO3 films and BaTiO3/SrTiO3 multilayers exhibits a change in the linear slope at a thickness of 75 nm. This behavior is explained by the change observed in the morphology at a thickness of 75 nm. To explain the thickness dependence of the dielectric constant, two approaches are considered in this paper, viz. a "series capacitor" model and a "dead layer" model.

Relevance: 80.00%

Abstract:

Recent studies have shown that the rhodium(II) acetate decomposition chemistry observed for α-diazoketones tethered to thienyl, furanyl, and benzofuranyl moieties depends not only on the nature of the heteroatom but also on the length of the aliphatic tether linking the diazoketone moiety with the aromatic fragment. The present thesis expands on these results and focuses on α-diazoketones tethered to benzothiophenes, pyrroles, and indoles by a methylene linker. In the case of benzothiophenes, it was shown that rhodium-catalyzed decomposition of 1-diazo-4-(3-benzothienyl)-2-butanone (146) and 1-diazo-4-(3-benzothienyl)-2-butanone (152) allows the isolation of 1,2,3a,3b-tetrahydro-3H-benzo[b]cyclopenta[1,3]cyclopropa[1,2-d]thiophen-3-one (147) and 1,2,3a,3b-tetrahydro-3H-benzo[b]cyclopenta[1,3]cyclopropa[1,2-d]thiophen-3-one (153). However, treatment of 1-diazo-3-(3-benzothienyl)-2-propanone (165) with Rh(II) acetate results in the formation of 2,3-dihydro-1H-benzo[b]cyclopenta[d]thiophen-2-one (159), while 1-diazo-3-(2-benzothienyl)-2-propanone under the same conditions gives 5,5-bis(1-benzothiophen-2-ylmethyl)-2(5H)-furanone (166) along with the tricycle 159. The chemistry of the pyrrolyl and indolyl moieties linked to terminal α-diazoketone systems was also investigated. The decomposition of 1-diazo-(2-pyrrolyl)-2-propanone (173) results in the formation of two products: the N-H insertion product 1H-pyrrolizin-2(3H)-one (176) and the alkylation product 4,6-dihydrocyclopenta[b]pyrrol-5(1H)-one (180). When 1-diazo-3-(3-indolyl)-3-propanone (194) is treated with a catalytic amount of Rh(II), 3,4-dihydrocyclopenta[b]indol-2(1H)-one (193) is isolated quantitatively. When the latter reaction is monitored by 1H NMR, intermediate 200 can be observed; its structure was confirmed by comparison to a series of model compounds. The mechanisms underlying these reactions, as well as their synthetic utility, are discussed.

Relevance: 80.00%

Abstract:

In this paper, we analyze recent developments in econometrics in the light of the theory of statistical tests. We first review some fundamental principles of the philosophy of science and of statistical theory, stressing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze several particular cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality, or dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional theory. Finally, we underline the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be demonstrated in finite samples.

Relevance: 80.00%

Abstract:

A series of model experiments with the coupled Max-Planck-Institute ECHAM5/OM climate model have been investigated and compared with microwave measurements from the Microwave Sounding Unit (MSU) and re-analysis data for the period 1979–2008. The evaluation is carried out by computing the Temperature in the Lower Troposphere (TLT) and Temperature in the Middle Troposphere (TMT) using the MSU weights from both University of Alabama (UAH) and Remote Sensing Systems (RSS) and restricting the study to primarily the tropical oceans. When forced by analysed sea surface temperature the model reproduces accurately the time-evolution of the mean outgoing tropospheric microwave radiation especially over tropical oceans but with a minor bias towards higher temperatures in the upper troposphere. The latest reanalyses data from the 25 year Japanese re-analysis (JRA25) and European Center for Medium Range Weather Forecasts Interim Reanalysis are in very close agreement with the time-evolution of the MSU data with a correlation of 0.98 and 0.96, respectively. The re-analysis trends are similar to the trends obtained from UAH but smaller than the trends from RSS. Comparison of TLT, computed from observations from UAH and RSS, with Sea Surface Temperature indicates that RSS has a warm bias after 1993. In order to identify the significance of the tropospheric linear temperature trends we determined the natural variability of 30-year trends from a 500 year control integration of the coupled ECHAM5 model. The model exhibits natural unforced variations of the 30 year tropospheric trend that vary within ±0.2 K/decade for the tropical oceans. This general result is supported by similar results from the Geophysical Fluid Dynamics Laboratory (GFDL) coupled climate model. Present MSU observations from UAH for the period 1979–2008 are well within this range but RSS is close to the upper positive limit of this variability. 
We have also compared the trend of the vertical lapse rate over the tropical oceans, assuming that the difference between TLT and TMT is an approximate measure of the lapse rate. The TLT–TMT trend is larger in both the measurements and in the JRA25 than in the model runs, by 0.04–0.06 K/decade. Furthermore, all 30-year TLT–TMT trends of the unforced 500-year integration vary within ±0.03 K/decade, suggesting that the models have a minor systematic warm bias in the upper troposphere.
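The natural-variability estimate described above, namely the spread of all 30-year trends in a long unforced control run, can be sketched as follows (synthetic data; the window is in time steps and the function name is an assumption):

```python
import numpy as np

def sliding_trends(series, window, step=1):
    """Linear trend (per time step) in every sliding window of a long
    control time series; the spread of these trends measures unforced
    natural variability of trends over that window length."""
    trends = []
    t = np.arange(window)
    for start in range(0, len(series) - window + 1, step):
        slope = np.polyfit(t, series[start:start + window], 1)[0]
        trends.append(slope)
    return np.array(trends)
```

An observed trend is then judged significant only if it falls outside the range spanned by these unforced control-run trends, which is the comparison made for the 1979-2008 MSU trends above.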

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The martian solsticial pause, presented in a companion paper (Lewis et al., this issue), was investigated further through a series of model runs using the UK version of the LMD/UK Mars Global Climate Model. It was found that the pause could not be adequately reproduced if radiatively active water ice clouds were omitted from the model. When clouds were used, along with a realistic time-dependent dust opacity distribution, a substantial minimum in near-surface transient eddy activity formed around solstice in both hemispheres. The net effect of the clouds in the model is, by altering the thermal structure of the atmosphere, to decrease the vertical shear of the westerly jet near the surface around solstice, and thus reduce baroclinic growth rates. A similar effect was seen under conditions of large dust loading, implying that northern midlatitude eddy activity will tend to become suppressed after a period of intense flushing storm formation around the northern cap edge. Suppression of baroclinic eddy generation by the barotropic component of the flow and via diabatic eddy dissipation were also investigated as possible mechanisms leading to the formation of the solsticial pause but were found not to make major contributions. Zonal variations in topography were found to be important, as their presence results in weakened transient eddies around winter solstice in both hemispheres, through modification of the near-surface flow. The zonal topographic asymmetry appears to be the primary reason for the weakness of eddy activity in the southern hemisphere relative to the northern hemisphere, and the ultimate cause of the solsticial pause in both hemispheres. The meridional topographic gradient was found to exert a much weaker influence on near-surface transient eddies.
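The link between vertical wind shear and baroclinic eddy growth invoked above can be illustrated with the standard Eady maximum growth-rate formula, sigma ≈ 0.31 f |dU/dz| / N; the default parameter values below are generic assumptions, not taken from the Mars model.

```python
def eady_growth_rate(shear, f=1.0e-4, N=1.0e-2):
    """Maximum Eady baroclinic growth rate (1/s):
    sigma = 0.31 * f * |dU/dz| / N, with Coriolis parameter f (1/s),
    vertical wind shear dU/dz (1/s), and buoyancy frequency N (1/s)."""
    return 0.31 * f * abs(shear) / N
```

In this picture, a cloud-induced reduction of near-surface westerly shear around solstice lowers the growth rate directly, consistent with the suppressed transient eddy activity reported above.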